Test Report: KVM_Linux 17530

407c000c6ef102291334b045d18fa6346a5c54cd:2023-10-31:31689

Test fail (8/321)

TestMultiNode/serial/FreshStart2Nodes (113.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-441410 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1031 17:55:23.267468  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:55:23.806402  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:34.047173  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:54.527855  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:56:15.983204  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:56:35.488960  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:56:43.670067  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-441410 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : exit status 80 (1m50.980815955s)

-- stdout --
	* [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node multinode-441410 in cluster multinode-441410
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting worker node multinode-441410-m02 in cluster multinode-441410
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.168.39.206
	* Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	  - env NO_PROXY=192.168.39.206
	
	

-- /stdout --
** stderr ** 
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f50
57b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
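The generated config above ties three CIDRs together: `podSubnet` (10.244.0.0/16), `serviceSubnet` (10.96.0.0/12), and the per-component flags that consume them (`allocate-node-cidrs`, `clusterCIDR`). As an editorial aside (not part of the log), a short stdlib sketch shows how they relate — the apiserver's in-cluster ClusterIP that kubeadm later puts in the cert SANs is the first usable address of the service subnet, and assuming the controller-manager's default `/24` node mask, the pod subnet splits into 256 per-node ranges:

```python
import ipaddress

# Values copied from the generated kubeadm config above.
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")

# The first host of the service CIDR is the apiserver's ClusterIP;
# it shows up later in the log among the apiserver cert IPs.
apiserver_cluster_ip = next(service_subnet.hosts())
print(apiserver_cluster_ip)  # 10.96.0.1

# Assumption: default controller-manager node-cidr-mask-size of 24.
# allocate-node-cidrs then hands each node one of these /24 ranges.
node_cidrs = list(pod_subnet.subnets(new_prefix=24))
print(node_cidrs[0], len(node_cidrs))  # 10.244.0.0/24 256
```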
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
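The pair of `ExecStart=` lines in the kubelet drop-in above is systemd's list-reset idiom: an empty `ExecStart=` in a drop-in clears any command inherited from the base `kubelet.service`, so the override's command becomes the only one that runs (multiple ExecStart entries are otherwise only allowed for `Type=oneshot` units). The file minikube scp's to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` has roughly this shape — illustrative, not the verbatim 379-byte file:

```
# 10-kubeadm.conf -- drop-in for kubelet.service (shape only)
[Service]
# Empty assignment resets the ExecStart list from the base unit...
ExecStart=
# ...so this is the sole command systemd runs for the unit.
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
```

The `sudo systemctl daemon-reload` run earlier in the log is what makes systemd re-read drop-ins like this one.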
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
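The bash one-liner above is an idempotent /etc/hosts rewrite: filter out any line already ending in a tab plus the control-plane name (`grep -v`), append the fresh mapping, write to a temp file, then copy it into place. As an illustrative sketch (editorial, not minikube code — `pin_host` is a hypothetical helper, demoed on a scratch file rather than the real /etc/hosts):

```python
import os
import tempfile

def pin_host(hosts_path: str, ip: str, name: str) -> None:
    # Drop lines ending in "\t<name>" (the grep -v step), append the
    # fresh mapping, then swap the rewritten file into place. The log
    # uses `sudo cp /tmp/h.$$ /etc/hosts`; os.replace is the
    # unprivileged analogue here.
    with open(hosts_path) as f:
        kept = [ln for ln in f if not ln.rstrip("\n").endswith("\t" + name)]
    kept.append(f"{ip}\t{name}\n")
    tmp = hosts_path + ".new"
    with open(tmp, "w") as f:
        f.writelines(kept)
    os.replace(tmp, hosts_path)

# Demo on a scratch copy of an /etc/hosts-like file with a stale entry.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n")
pin_host(path, "192.168.39.206", "control-plane.minikube.internal")
print(open(path).read())
```

Running it twice leaves a single, current `control-plane.minikube.internal` line — the property the one-liner relies on across restarts.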
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
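The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed CA directory convention: a CA in `/etc/ssl/certs` is found via a `<subject-hash>.0` symlink pointing at the PEM file. A minimal sketch of that check-then-link step (editorial illustration; `ensure_hash_link` is a hypothetical helper, and the relative link target simplifies the log's absolute-path `ln -fs`):

```python
import os
import tempfile

def ensure_hash_link(certs_dir: str, pem_name: str, subject_hash: str) -> str:
    # Mirrors the log's idempotent step:
    #   test -L /etc/ssl/certs/<hash>.0 || ln -fs <cert>.pem /etc/ssl/certs/<hash>.0
    link = os.path.join(certs_dir, subject_hash + ".0")
    if not os.path.islink(link):       # `test -L`: skip if the link exists
        os.symlink(pem_name, link)     # `ln -s`
    return link

# Demo in a scratch directory, using the hash the log printed for
# minikubeCA.pem (b5213941).
d = tempfile.mkdtemp()
open(os.path.join(d, "minikubeCA.pem"), "w").close()
link = ensure_hash_link(d, "minikubeCA.pem", "b5213941")
print(os.readlink(link))  # minikubeCA.pem
```

The ".0" suffix is a collision counter: a second, distinct CA whose subject happens to hash the same would get ".1", which is why the lookup is by hash-plus-index rather than filename.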
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	* 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-441410 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 " : exit status 80
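The GUEST_START error above comes from go-getter's `checksum=file:...` verification step: minikube downloads `kubelet` and its companion `.sha256` file from dl.k8s.io, and the TCP connection was reset mid-transfer. A minimal offline sketch of that verification step (simulated with a locally generated file, since the real download is what failed here; the paths and the `fake-kubelet-binary` payload are illustrative only, not taken from the test run):

```shell
# Simulate go-getter's "checksum=file:<url>.sha256" flow without the network:
# 1) produce a binary and its sha256 sidecar, 2) recompute and compare.
tmpdir="$(mktemp -d)"
printf 'fake-kubelet-binary' > "$tmpdir/kubelet"

# Sidecar file holds the expected digest (first field of sha256sum output)
sha256sum "$tmpdir/kubelet" | awk '{print $1}' > "$tmpdir/kubelet.sha256"

expected="$(cat "$tmpdir/kubelet.sha256")"
actual="$(sha256sum "$tmpdir/kubelet" | awk '{print $1}')"

if [ "$expected" = "$actual" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
rm -rf "$tmpdir"
```

In the failing run the reset happened during the transfer itself (`read: connection reset by peer`), before any checksum comparison could occur, so this is a transient network failure on the CI host rather than a corrupted artifact.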
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.040401735s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p json-output-error-680388    | json-output-error-680388 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:52 UTC |                     |
	|         | --memory=2200 --output=json    |                          |         |                |                     |                     |
	|         | --wait=true --driver=fail      |                          |         |                |                     |                     |
	| delete  | -p json-output-error-680388    | json-output-error-680388 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:52 UTC | 31 Oct 23 17:52 UTC |
	| start   | -p first-585449 --driver=kvm2  | first-585449             | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:52 UTC | 31 Oct 23 17:52 UTC |
	| start   | -p second-588431 --driver=kvm2 | second-588431            | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:52 UTC | 31 Oct 23 17:53 UTC |
	|         |                                |                          |         |                |                     |                     |
	| delete  | -p second-588431               | second-588431            | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:53 UTC | 31 Oct 23 17:53 UTC |
	| delete  | -p first-585449                | first-585449             | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:53 UTC | 31 Oct 23 17:53 UTC |
	| start   | -p mount-start-1-422707        | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:53 UTC | 31 Oct 23 17:54 UTC |
	|         | --memory=2048 --mount          |                          |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize    |                          |         |                |                     |                     |
	|         | 6543 --mount-port 46464        |                          |         |                |                     |                     |
	|         | --mount-uid 0 --no-kubernetes  |                          |         |                |                     |                     |
	|         | --driver=kvm2                  |                          |         |                |                     |                     |
	| mount   | /home/jenkins:/minikube-host   | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC |                     |
	|         | --profile mount-start-1-422707 |                          |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L    |                          |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543     |                          |         |                |                     |                     |
	|         | --port 46464 --type 9p --uid 0 |                          |         |                |                     |                     |
	| ssh     | mount-start-1-422707 ssh -- ls | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | /minikube-host                 |                          |         |                |                     |                     |
	| ssh     | mount-start-1-422707 ssh --    | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | mount | grep 9p                |                          |         |                |                     |                     |
	| start   | -p mount-start-2-444347        | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | --memory=2048 --mount          |                          |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize    |                          |         |                |                     |                     |
	|         | 6543 --mount-port 46465        |                          |         |                |                     |                     |
	|         | --mount-uid 0 --no-kubernetes  |                          |         |                |                     |                     |
	|         | --driver=kvm2                  |                          |         |                |                     |                     |
	| mount   | /home/jenkins:/minikube-host   | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC |                     |
	|         | --profile mount-start-2-444347 |                          |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L    |                          |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543     |                          |         |                |                     |                     |
	|         | --port 46465 --type 9p --uid 0 |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh -- ls | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | /minikube-host                 |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh --    | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | mount | grep 9p                |                          |         |                |                     |                     |
	| delete  | -p mount-start-1-422707        | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | --alsologtostderr -v=5         |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh -- ls | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | /minikube-host                 |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh --    | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	|         | mount | grep 9p                |                          |         |                |                     |                     |
	| stop    | -p mount-start-2-444347        | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:54 UTC |
	| start   | -p mount-start-2-444347        | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:54 UTC | 31 Oct 23 17:55 UTC |
	| mount   | /home/jenkins:/minikube-host   | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --profile mount-start-2-444347 |                          |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L    |                          |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543     |                          |         |                |                     |                     |
	|         | --port 46465 --type 9p --uid 0 |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh -- ls | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	|         | /minikube-host                 |                          |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh --    | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	|         | mount | grep 9p                |                          |         |                |                     |                     |
	| delete  | -p mount-start-2-444347        | mount-start-2-444347     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	| delete  | -p mount-start-1-422707        | mount-start-1-422707     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	| start   | -p multinode-441410            | multinode-441410         | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --wait=true --memory=2200      |                          |         |                |                     |                     |
	|         | --nodes=2 -v=8                 |                          |         |                |                     |                     |
	|         | --alsologtostderr              |                          |         |                |                     |                     |
	|         | --driver=kvm2                  |                          |         |                |                     |                     |
	|---------|--------------------------------|--------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:d
ocker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f50
57b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
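The log above shows minikube's node-readiness wait: it re-fetches the Node object roughly every 500 ms until the `Ready` condition flips to `"True"` (here after ~10.5 s), then moves on to waiting for system-critical pods. A minimal standalone sketch of that poll-until-ready pattern, using a hypothetical `get_status` callable in place of the real API client (names and intervals are illustrative, not minikube's actual implementation):

```python
import time

def wait_for_node_ready(get_status, timeout=360.0, interval=0.5):
    """Poll get_status() until it returns "True" or the timeout expires.

    get_status: callable returning the node's Ready condition as a string
                ("True"/"False"), standing in for a GET on /api/v1/nodes/<name>.
    Returns True if the node became Ready in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "True":
            return True
        time.sleep(interval)
    return False

# Simulated status source: reports "False" twice, then "True",
# mimicking the repeated 200 OK responses in the log above.
_responses = iter(["False", "False", "True"])
ready = wait_for_node_ready(lambda: next(_responses), timeout=5.0, interval=0.01)
print(ready)  # True
```

The same loop shape repeats in the log for each system pod (`pod_ready.go`), just with a different GET target and condition check.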
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
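The script above is an idempotent hostname patch: if `/etc/hosts` already maps some name on `127.0.1.1`, rewrite that line; otherwise append one. A standalone sketch of the same logic against a scratch file (hostname and paths are illustrative; `\s` in `grep`/`sed` assumes GNU tools, as on the Buildroot guest):

```shell
# Sketch of the /etc/hosts update minikube runs over SSH, applied to a
# temporary file instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

name="multinode-441410-m02"   # illustrative node name
if ! grep -q "\s$name$" "$hosts"; then
    if grep -q '^127\.0\.1\.1\s' "$hosts"; then
        # an entry exists: rewrite it in place
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
    else
        # no 127.0.1.1 entry yet: append one
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
cat "$hosts"
```

Running it twice is a no-op the second time, because the first `grep` then finds the name already present.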
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
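The `sudo diff -u … || { sudo mv …; systemctl … }` one-liner above is an install-only-if-changed pattern: the unit file is replaced and docker restarted only when the freshly rendered unit differs from what is on disk (here it did not exist yet, hence the harmless `can't stat` message before the symlink is created). The pattern in isolation, on plain files and without systemd:

```shell
# "Install if changed" pattern from the log: the copy/reload branch
# fires only when the new file differs from (or the old file is
# missing, as on first boot); unchanged re-runs are no-ops.
old=$(mktemp -u)            # a nonexistent path, like the first boot
new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$new"

reloaded=no
diff -u "$old" "$new" >/dev/null 2>&1 || { cp "$new" "$old"; reloaded=yes; }
echo "first run: reloaded=$reloaded"

reloaded=no
diff -u "$old" "$new" >/dev/null 2>&1 || { cp "$new" "$old"; reloaded=yes; }
echo "second run: reloaded=$reloaded"
```

The empty `ExecStart=` line in the unit itself serves a related purpose: it clears any inherited `ExecStart=` so the one command that follows is the only one, which systemd requires for `Type=notify` services.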
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
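	(Editor's note: the kubeadm config dumped above is a four-document YAML stream — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration. A minimal, stdlib-only Python sketch of how such a stream can be sanity-checked; this helper is hypothetical and is not part of minikube.)

```python
# Hypothetical sanity check for a '---'-separated kubeadm config stream.
# No YAML parser is needed just to read each document's 'kind:'.

def kinds(stream: str) -> list[str]:
    """Return the 'kind:' value of each document in a multi-doc YAML stream."""
    out = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out

config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

print(kinds(config))
```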
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
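	(Editor's note: the download URLs above carry a `?checksum=file:...sha256` query, i.e. the fetched binary is verified against a sidecar `.sha256` file. A hedged Python re-implementation of just that digest comparison; this is an illustrative sketch, not minikube's or go-getter's actual code.)

```python
# Hypothetical sketch of the checksum comparison implied by URLs like
# ...kubelet?checksum=file:...kubelet.sha256
import hashlib

def sha256_matches(payload: bytes, sha256_file_text: str) -> bool:
    """Compare a payload's SHA-256 digest with the first token of a .sha256 file."""
    expected = sha256_file_text.split()[0].strip().lower()
    return hashlib.sha256(payload).hexdigest() == expected

blob = b"fake kubelet bytes"
digest = hashlib.sha256(blob).hexdigest()
print(sha256_matches(blob, f"{digest}  kubelet\n"))  # True for a matching digest
```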
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
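	(Editor's note: the root cause above is a transient TCP reset — `read: connection reset by peer` — during the kubelet download, so the test fails on network flakiness rather than a code regression. A generic retry-with-backoff helper sketched below; this is hypothetical and does not reflect minikube's own retry logic.)

```python
# Hypothetical retry helper for transient connection resets.
import time

def retry(fn, attempts: int = 3, delay: float = 0.0):
    """Call fn() until it succeeds or attempts are exhausted."""
    last = None
    for i in range(attempts):
        try:
            return fn()
        except ConnectionResetError as exc:
            last = exc
            time.sleep(delay * (2 ** i))  # exponential backoff between attempts
    raise last

calls = {"n": 0}
def flaky():
    """Simulated download that resets twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionResetError("read: connection reset by peer")
    return "kubelet downloaded"

print(retry(flaky))  # succeeds on the third attempt
```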
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 17:57:11 UTC. --
	Oct 31 17:56:19 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:19.965165865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:23 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6400c9ed90ae36fd3f2ebe0bbcc74b7cb538bb6f9027126b2219e7c60dd7d48d/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:27 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:27Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Oct 31 17:56:27 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:27.194943424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:27 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:27.195061609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:27 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:27.195094235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:27 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:27.195108115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808552198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808635238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808675596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	74195b9ce8448       6e38f40d628db                                                                              40 seconds ago       Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                              40 seconds ago       Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   44 seconds ago       Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                              52 seconds ago       Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                              About a minute ago   Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                              About a minute ago   Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                              About a minute ago   Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                              About a minute ago   Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 17:57:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 17:56:37 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 17:56:37 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 17:56:37 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 17:56:37 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         64s
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      52s
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
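	(Editor's note: the percentages in the resource tables above follow from the node's allocatable resources — 2 CPUs and 2165900Ki memory — with integer truncation. A quick check, not kubectl's actual code:)

```python
# Recompute the request percentages shown in `describe nodes`.
def pct(request: float, allocatable: float) -> int:
    """Request as an integer percentage of allocatable, truncated like kubectl."""
    return int(request * 100 / allocatable)

print(pct(850, 2000))            # total CPU requests: 850m of 2000m -> 42
print(pct(220 * 1024, 2165900))  # total memory requests: 220Mi of 2165900Ki -> 10
print(pct(250, 2000))            # kube-apiserver CPU request: 250m -> 12
```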
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           53s                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                41s                kubelet          Node multinode-441410 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.81786Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","added-peer-id":"8d50a8842d8d7ae5","added-peer-peer-urls":["https://192.168.39.206:2380"]}
	{"level":"info","ts":"2023-10-31T17:56:00.849503Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-31T17:56:00.849694Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  17:57:11 up 1 min,  0 users,  load average: 0.78, 0.36, 0.13
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 17:56:27.500973       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1031 17:56:27.501062       1 main.go:107] hostIP = 192.168.39.206
	podIP = 192.168.39.206
	I1031 17:56:27.501238       1 main.go:116] setting mtu 1500 for CNI 
	I1031 17:56:27.501325       1 main.go:146] kindnetd IP family: "ipv4"
	I1031 17:56:27.501344       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1031 17:56:27.900000       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 17:56:27.900058       1 main.go:227] handling current node
	I1031 17:56:37.914124       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 17:56:37.914446       1 main.go:227] handling current node
	I1031 17:56:47.927872       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 17:56:47.927948       1 main.go:227] handling current node
	I1031 17:56:57.932854       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 17:56:57.933012       1 main.go:227] handling current node
	I1031 17:57:07.947867       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 17:57:07.947915       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:18.222201       1 shared_informer.go:318] Caches are synced for cronjob
	I1031 17:56:18.234469       1 shared_informer.go:318] Caches are synced for resource quota
	I1031 17:56:18.609038       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 17:56:18.641758       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1031 17:56:18.679653       1 shared_informer.go:318] Caches are synced for garbage collector
	I1031 17:56:18.679715       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1031 17:56:19.007192       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tbl8r"
	I1031 17:56:19.019632       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6rrkf"
	I1031 17:56:19.207349       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lwggp"
	I1031 17:56:19.221066       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.234948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="594.089879ms"
	I1031 17:56:19.254141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.095117ms"
	I1031 17:56:19.254510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="276.68µs"
	I1031 17:56:19.254998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.867µs"
	I1031 17:56:19.630954       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1031 17:56:19.680357       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 17:57:11 UTC. --
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.047396    2461 topology_manager.go:215] "Topology Admit Handler" podUID="6c0f54ca-e87f-4d58-a609-41877ec4be36" podNamespace="kube-system" podName="kube-proxy-tbl8r"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.054980    2461 topology_manager.go:215] "Topology Admit Handler" podUID="ee7915c4-6d8d-49d1-9e06-12fe2d3aad54" podNamespace="kube-system" podName="kindnet-6rrkf"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.146388    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee7915c4-6d8d-49d1-9e06-12fe2d3aad54-cni-cfg\") pod \"kindnet-6rrkf\" (UID: \"ee7915c4-6d8d-49d1-9e06-12fe2d3aad54\") " pod="kube-system/kindnet-6rrkf"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.146755    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee7915c4-6d8d-49d1-9e06-12fe2d3aad54-xtables-lock\") pod \"kindnet-6rrkf\" (UID: \"ee7915c4-6d8d-49d1-9e06-12fe2d3aad54\") " pod="kube-system/kindnet-6rrkf"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.146857    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c0f54ca-e87f-4d58-a609-41877ec4be36-kube-proxy\") pod \"kube-proxy-tbl8r\" (UID: \"6c0f54ca-e87f-4d58-a609-41877ec4be36\") " pod="kube-system/kube-proxy-tbl8r"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.146940    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdmt\" (UniqueName: \"kubernetes.io/projected/6c0f54ca-e87f-4d58-a609-41877ec4be36-kube-api-access-rzdmt\") pod \"kube-proxy-tbl8r\" (UID: \"6c0f54ca-e87f-4d58-a609-41877ec4be36\") " pod="kube-system/kube-proxy-tbl8r"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.147010    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee7915c4-6d8d-49d1-9e06-12fe2d3aad54-lib-modules\") pod \"kindnet-6rrkf\" (UID: \"ee7915c4-6d8d-49d1-9e06-12fe2d3aad54\") " pod="kube-system/kindnet-6rrkf"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.147080    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbrm\" (UniqueName: \"kubernetes.io/projected/ee7915c4-6d8d-49d1-9e06-12fe2d3aad54-kube-api-access-8fbrm\") pod \"kindnet-6rrkf\" (UID: \"ee7915c4-6d8d-49d1-9e06-12fe2d3aad54\") " pod="kube-system/kindnet-6rrkf"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.147146    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c0f54ca-e87f-4d58-a609-41877ec4be36-xtables-lock\") pod \"kube-proxy-tbl8r\" (UID: \"6c0f54ca-e87f-4d58-a609-41877ec4be36\") " pod="kube-system/kube-proxy-tbl8r"
	Oct 31 17:56:19 multinode-441410 kubelet[2461]: I1031 17:56:19.147208    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c0f54ca-e87f-4d58-a609-41877ec4be36-lib-modules\") pod \"kube-proxy-tbl8r\" (UID: \"6c0f54ca-e87f-4d58-a609-41877ec4be36\") " pod="kube-system/kube-proxy-tbl8r"
	Oct 31 17:56:23 multinode-441410 kubelet[2461]: I1031 17:56:23.242235    2461 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6400c9ed90ae36fd3f2ebe0bbcc74b7cb538bb6f9027126b2219e7c60dd7d48d"
	Oct 31 17:56:23 multinode-441410 kubelet[2461]: I1031 17:56:23.297503    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tbl8r" podStartSLOduration=5.29745439 podCreationTimestamp="2023-10-31 17:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-31 17:56:23.29534338 +0000 UTC m=+16.779949014" watchObservedRunningTime="2023-10-31 17:56:23.29745439 +0000 UTC m=+16.782060010"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.312408    2461 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.353605    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6rrkf" podStartSLOduration=7.517705728 podCreationTimestamp="2023-10-31 17:56:19 +0000 UTC" firstStartedPulling="2023-10-31 17:56:23.247030464 +0000 UTC m=+16.731636077" lastFinishedPulling="2023-10-31 17:56:27.082822833 +0000 UTC m=+20.567428446" observedRunningTime="2023-10-31 17:56:28.385210671 +0000 UTC m=+21.869816291" watchObservedRunningTime="2023-10-31 17:56:30.353498097 +0000 UTC m=+23.838103718"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.354211    2461 topology_manager.go:215] "Topology Admit Handler" podUID="13e0e515-f978-4945-abf2-8224996d04b7" podNamespace="kube-system" podName="coredns-5dd5756b68-lwggp"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.366855    2461 topology_manager.go:215] "Topology Admit Handler" podUID="24199518-9184-4f82-a011-afe05284ce89" podNamespace="kube-system" podName="storage-provisioner"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.439492    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/24199518-9184-4f82-a011-afe05284ce89-tmp\") pod \"storage-provisioner\" (UID: \"24199518-9184-4f82-a011-afe05284ce89\") " pod="kube-system/storage-provisioner"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.439557    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13e0e515-f978-4945-abf2-8224996d04b7-config-volume\") pod \"coredns-5dd5756b68-lwggp\" (UID: \"13e0e515-f978-4945-abf2-8224996d04b7\") " pod="kube-system/coredns-5dd5756b68-lwggp"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.439585    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkzc\" (UniqueName: \"kubernetes.io/projected/13e0e515-f978-4945-abf2-8224996d04b7-kube-api-access-kpkzc\") pod \"coredns-5dd5756b68-lwggp\" (UID: \"13e0e515-f978-4945-abf2-8224996d04b7\") " pod="kube-system/coredns-5dd5756b68-lwggp"
	Oct 31 17:56:30 multinode-441410 kubelet[2461]: I1031 17:56:30.439608    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29zb\" (UniqueName: \"kubernetes.io/projected/24199518-9184-4f82-a011-afe05284ce89-kube-api-access-z29zb\") pod \"storage-provisioner\" (UID: \"24199518-9184-4f82-a011-afe05284ce89\") " pod="kube-system/storage-provisioner"
	Oct 31 17:56:32 multinode-441410 kubelet[2461]: I1031 17:56:32.699204    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lwggp" podStartSLOduration=13.69916462 podCreationTimestamp="2023-10-31 17:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-31 17:56:32.680243282 +0000 UTC m=+26.164848903" watchObservedRunningTime="2023-10-31 17:56:32.69916462 +0000 UTC m=+26.183770234"
	Oct 31 17:57:06 multinode-441410 kubelet[2461]: E1031 17:57:06.812251    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 17:57:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 17:57:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 17:57:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [74195b9ce844] <==
	* I1031 17:56:31.688139       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 17:56:31.704020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 17:56:31.704452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 17:56:31.715827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 17:56:31.716754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8daaae3b-4ad0-49b1-a652-0df686e74f34", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1 became leader
	I1031 17:56:31.716943       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	I1031 17:56:31.819463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (113.09s)

TestMultiNode/serial/DeployApp2Nodes (685.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- rollout status deployment/busybox
E1031 17:57:57.410252  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:00:13.565938  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:00:23.270348  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 18:00:41.251318  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:01:15.984238  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 18:01:46.312138  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 18:05:13.565363  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:05:23.270960  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 18:06:15.984567  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-441410 -- rollout status deployment/busybox: exit status 1 (10m4.359525261s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E1031 18:07:39.031276  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:512: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.io: exit status 1 (130.868922ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-5bc68d56bd-67pbp does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:526: Pod busybox-5bc68d56bd-67pbp could not resolve 'kubernetes.io': exit status 1
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-682nc -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.default: exit status 1 (129.270063ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-5bc68d56bd-67pbp does not have a host assigned

** /stderr **
multinode_test.go:536: Pod busybox-5bc68d56bd-67pbp could not resolve 'kubernetes.default': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-682nc -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (139.919752ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-5bc68d56bd-67pbp does not have a host assigned

** /stderr **
multinode_test.go:544: Pod busybox-5bc68d56bd-67pbp could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-682nc -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.052775945s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-444347 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --profile mount-start-2-444347                    |                      |         |                |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |                |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |                |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh -- ls                    | mount-start-2-444347 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| ssh     | mount-start-2-444347 ssh --                       | mount-start-2-444347 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	|         | mount | grep 9p                                   |                      |         |                |                     |                     |
	| delete  | -p mount-start-2-444347                           | mount-start-2-444347 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	| delete  | -p mount-start-1-422707                           | mount-start-1-422707 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	| start   | -p multinode-441410                               | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=kvm2                                     |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- apply -f                   | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC | 31 Oct 23 17:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- rollout                    | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
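The lines above echo the libvirt domain XML that the KVM driver defines for the node (name, memory, vCPUs, ISO cdrom, raw disk, two virtio NICs). A minimal Go sketch of assembling such a definition from a template — the struct and helper names here are illustrative assumptions, not the driver's actual types:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// domainTmpl is a trimmed-down sketch of the domain XML logged above;
// only the fields needed to show the pattern are included.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
</domain>`

// domainConfig is a hypothetical config struct for illustration.
type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
}

// renderDomainXML fills the template with the machine's settings.
func renderDomainXML(c domainConfig) (string, error) {
	t, err := template.New("domain").Parse(domainTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// The values match the logged domain: --memory=2200, 2 vCPUs.
	xml, err := renderDomainXML(domainConfig{Name: "multinode-441410", MemoryMiB: 2200, VCPUs: 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(xml)
}
```

The rendered XML would then be passed to libvirt (e.g. `virsh define`) to create the domain, as the "Creating domain..." line records.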
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
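The "Using SSH client type: external" block above logs the exact argument vector handed to `/usr/bin/ssh`. A sketch of assembling that list in Go — the helper name is hypothetical; the flags are copied from the logged command:

```go
package main

import (
	"fmt"
	"strconv"
)

// buildSSHArgs reassembles the external-ssh argument list seen in the log:
// non-interactive options first, then user@host, identity file, and port.
func buildSSHArgs(user, host, keyPath string, port int) []string {
	return []string{
		"-F", "/dev/null", // ignore the user's ssh config
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no", // key auth only
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no", // fresh VM, no known host key
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, host),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", strconv.Itoa(port),
	}
}

func main() {
	fmt.Println(buildSSHArgs("docker", "192.168.39.206", "/tmp/id_rsa", 22))
}
```

Running `ssh` with these arguments and the command `exit 0`, as logged, is just a liveness probe: a zero exit status means sshd inside the VM is accepting the machine's key.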
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
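The shell snippet above makes the hostname resolvable locally: if no `/etc/hosts` line already ends in the machine name, it rewrites an existing `127.0.1.1` entry or appends one. The same logic in pure Go, as a sketch (the function is illustrative, not minikube's own code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the logged shell: leave hosts unchanged if the name
// is already mapped, otherwise rewrite the 127.0.1.1 line or append one.
func ensureHostname(hosts, name string) string {
	// Equivalent of: grep -xq '.*\s<name>' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	// Equivalent of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <name>/g'
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// Equivalent of: echo '127.0.1.1 <name>' | tee -a /etc/hosts
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(hosts, "multinode-441410"))
}
```

Like the shell version, this is idempotent: a second call with the same name returns the input unchanged.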
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
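The one-liner above updates `/etc/hosts` idempotently: strip any existing `host.minikube.internal` entry, append the fresh mapping, and copy the temp file back into place. A minimal sketch of the same pattern, run against a scratch hosts file so no sudo is needed (the `10.0.0.5` address is a stand-in, not minikube's):

```shell
#!/usr/bin/env bash
# Sketch of the idempotent hosts-entry update from the log, applied to a
# scratch file instead of /etc/hosts (so no sudo is needed).
set -e
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"

# Strip any stale entry for the name, then append the current mapping --
# the same "{ grep -v ...; echo ...; } > tmp; cp tmp" pattern as the log.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '10.0.0.5\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

count=$(grep -c 'host.minikube.internal' "$hosts")
echo "$count"   # the entry appears exactly once, with the new address
```

Because the old entry is removed before the new one is appended, re-running the snippet never accumulates duplicate lines.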
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
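The preload step above scp's a ~400 MB lz4-compressed image tarball into the VM and unpacks it with GNU tar's `-I` flag, which delegates (de)compression to an external filter program. The same mechanism, sketched with `gzip` standing in for `lz4` so it runs wherever GNU tar and gzip exist:

```shell
#!/usr/bin/env bash
# tar -I hands compression off to an arbitrary filter program; the log uses
# `tar -I lz4 -C /var -xf /preloaded.tar.lz4`. gzip stands in for lz4 here.
set -e
work=$(mktemp -d)
mkdir -p "$work/src/lib/docker"
echo "layer-data" > "$work/src/lib/docker/layer"

# Pack with gzip as the -I filter...
tar -I gzip -C "$work/src" -cf "$work/preloaded.tar.gz" lib

# ...and unpack into a destination dir, mirroring `-C /var -xf`.
mkdir "$work/dst"
tar -I gzip -C "$work/dst" -xf "$work/preloaded.tar.gz"
cat "$work/dst/lib/docker/layer"   # -> layer-data
```

`-C` changes directory before archiving or extracting, which is why the log can extract straight into `/var` and then simply `rm` the tarball.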
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
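The rendered kubeadm config above is a single file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick shell sanity check of that multi-document layout, using only the `kind:` headers taken from the log (body fields elided here):

```shell
#!/usr/bin/env bash
# Reconstruct the skeleton of the four-document kubeadm config and count
# the documents by their `kind:` headers.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # -> 4
```

kubeadm reads all four documents from the one file it is handed, which is why minikube writes them together to `/var/tmp/minikube/kubeadm.yaml.new` further down in the log.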
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
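The empty `ExecStart=` immediately followed by a second `ExecStart=` in the kubelet unit above is the standard systemd drop-in idiom for replacing, rather than appending to, a unit's start command. A minimal sketch of such a drop-in (the path and trimmed flag list here are illustrative, not minikube's exact file):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (illustrative path)
[Service]
# An empty ExecStart= clears any value inherited from the base unit;
# the next ExecStart= then becomes the only start command.
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --config=/var/lib/kubelet/config.yaml
```

As the log shows, the drop-in only takes effect after `sudo systemctl daemon-reload`.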
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
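The three `test -L ... || ln -fs ...` blocks above build OpenSSL subject-hash symlinks (`<hash>.0` pointing at the certificate), the layout `/etc/ssl/certs` lookups rely on. The same pattern, run against a throwaway self-signed cert in a temp directory instead of the system trust store:

```shell
#!/usr/bin/env bash
# Build an OpenSSL subject-hash symlink (<hash>.0) for a certificate, the
# same layout /etc/ssl/certs uses; done with a throwaway cert in a temp dir.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null

# `openssl x509 -hash` prints the hash OpenSSL expects as the link name.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

# The link resolves back to the certificate.
openssl x509 -noout -subject -in "$dir/$hash.0"
```

The `.0` suffix disambiguates multiple certificates that hash to the same value (`.1`, `.2`, ...), which is why the log checks for the exact link name before creating it.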
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
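The domain definition above is interleaved with per-line log prefixes. Reassembled (keeping only what the log actually shows), the XML handed to libvirt reads approximately:

```xml
<domain type='kvm'>
  <name>multinode-441410-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-multinode-441410'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
  </devices>
</domain>
```

Note the two virtio NICs: one on the private `mk-multinode-441410` network (used for cluster traffic) and one on libvirt's `default` network, matching the two MAC addresses logged during network activation below.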
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
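"Reserving static IP address" above pins the DHCP-assigned address to the VM's MAC so the node keeps 192.168.39.59 across restarts. In libvirt this is conventionally done by adding a DHCP host entry to the network definition (the exact mechanism is not shown in the log; this fragment is an assumption based on the lease fields logged):

```xml
<!-- Hypothetical ip-dhcp-host entry for network mk-multinode-441410,
     built from the {name, mac, ip} triple the log matches against. -->
<host mac='52:54:00:52:0b:10' name='multinode-441410-m02' ip='192.168.39.59'/>
```

Such an entry can be applied live with `virsh net-update <network> add ip-dhcp-host '<host …/>'`, after which the DHCP lease lookup in the log succeeds.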
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
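The kubeadm config dumped above is one multi-document YAML stream: an InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`. As a minimal illustration (not minikube's own code, and using only plain string handling rather than a real YAML parser), the document layout can be inspected like this:

```python
# Sketch: split a kubeadm-style multi-document YAML stream on '---' and
# report each document's apiVersion/kind. Illustrative only; a real tool
# would use a YAML parser.

def list_kinds(multi_doc: str):
    """Return (apiVersion, kind) pairs for each '---'-separated document."""
    kinds = []
    for doc in multi_doc.split("\n---\n"):
        api = kind = None
        for line in doc.splitlines():
            line = line.strip()
            if line.startswith("apiVersion:"):
                api = line.split(":", 1)[1].strip()
            elif line.startswith("kind:"):
                kind = line.split(":", 1)[1].strip()
        if kind:
            kinds.append((api, kind))
    return kinds

config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

print(list_kinds(config))
```

The four kinds match the four documents in the log above; recent kubeadm releases can also validate such a file directly with `kubeadm config validate`.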
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
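The GUEST_START failure above comes from the download step: each binary URL carries a `checksum=file:…sha256` query parameter, so the fetched artifact is rejected unless its SHA-256 digest matches the published `.sha256` file, and here the transfer itself died mid-stream (connection reset). As an illustrative sketch of that verification pattern (not minikube/go-getter code; the payload and file name below are made up):

```python
# Sketch: verify a downloaded payload against the contents of a published
# .sha256 file, in both bare-digest and "sha256sum" ("<digest>  <name>") forms.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of the downloaded bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(payload: bytes, checksum_line: str) -> bool:
    """Accept the payload only if its digest matches the first field of the
    checksum file's contents."""
    expected = checksum_line.split()[0].strip().lower()
    return sha256_of(payload) == expected

payload = b"fake-kubelet-binary"          # stand-in for the real artifact
good = sha256_of(payload)
print(verify_download(payload, good + "  kubelet"))   # matching digest
print(verify_download(payload, "0" * 64))             # mismatched digest
```

An interrupted transfer like the one in the log fails this check (or fails earlier with a network error), which is why minikube refuses to install the partial kubelet binary.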
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:08:36 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         12 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              12 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         12 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         12 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         12 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         12 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         12 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:08:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-441410 status is now: NodeReady
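	The "Allocated resources" totals in the node description above are just the per-pod requests summed and divided by node capacity. A small arithmetic check (illustrative, values copied from the pod table; the node reports `cpu: 2`, i.e. 2000 millicores):

```python
# Sketch: reproduce the "cpu 850m (42%)" allocated-resources line from the
# per-pod CPU requests listed in the Non-terminated Pods table.
requests_m = {
    "coredns-5dd5756b68-lwggp": 100,
    "etcd-multinode-441410": 100,
    "kindnet-6rrkf": 100,
    "kube-apiserver-multinode-441410": 250,
    "kube-controller-manager-multinode-441410": 200,
    "kube-scheduler-multinode-441410": 100,
    # busybox, kube-proxy and storage-provisioner request 0
}
total_m = sum(requests_m.values())    # total requested millicores
capacity_m = 2 * 1000                 # node capacity: 2 CPUs
pct = total_m * 100 // capacity_m     # truncated percent, as printed
print(f"cpu {total_m}m ({pct}%)")     # cpu 850m (42%)
```

	The same arithmetic with the memory requests (70Mi + 100Mi + 50Mi = 220Mi against ~2165900Ki) yields the 10% memory line.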
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:08:36 up 13 min,  0 users,  load average: 0.37, 0.37, 0.22
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:06:28.467730       1 main.go:227] handling current node
	I1031 18:06:38.472488       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:06:38.472517       1 main.go:227] handling current node
	I1031 18:06:48.484994       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:06:48.485044       1 main.go:227] handling current node
	I1031 18:06:58.499446       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:06:58.499472       1 main.go:227] handling current node
	I1031 18:07:08.505017       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:08.505067       1 main.go:227] handling current node
	I1031 18:07:18.517082       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:18.517134       1 main.go:227] handling current node
	I1031 18:07:28.529885       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:28.529940       1 main.go:227] handling current node
	I1031 18:07:38.543119       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:38.543178       1 main.go:227] handling current node
	I1031 18:07:48.556905       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:48.556945       1 main.go:227] handling current node
	I1031 18:07:58.561390       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:58.561442       1 main.go:227] handling current node
	I1031 18:08:08.570102       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:08.570156       1 main.go:227] handling current node
	I1031 18:08:18.574514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:18.574630       1 main.go:227] handling current node
	I1031 18:08:28.579833       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:28.579881       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:19.221066       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.234948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="594.089879ms"
	I1031 17:56:19.254141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.095117ms"
	I1031 17:56:19.254510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="276.68µs"
	I1031 17:56:19.254998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.867µs"
	I1031 17:56:19.630954       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1031 17:56:19.680357       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:08:36 UTC. --
	Oct 31 18:02:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:03:06 multinode-441410 kubelet[2461]: E1031 18:03:06.810213    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:04:06 multinode-441410 kubelet[2461]: E1031 18:04:06.811886    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [74195b9ce844] <==
	* I1031 17:56:31.688139       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 17:56:31.704020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 17:56:31.704452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 17:56:31.715827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 17:56:31.716754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8daaae3b-4ad0-49b1-a652-0df686e74f34", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1 became leader
	I1031 17:56:31.716943       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	I1031 17:56:31.819463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  60s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (685.13s)

TestMultiNode/serial/PingHostFrom2Pods (2.63s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-67pbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (133.498658ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-5bc68d56bd-67pbp does not have a host assigned

** /stderr **
multinode_test.go:562: Pod busybox-5bc68d56bd-67pbp could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-682nc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-441410 -- exec busybox-5bc68d56bd-682nc -- sh -c "ping -c 1 192.168.39.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.003888779s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p mount-start-1-422707                           | mount-start-1-422707 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC | 31 Oct 23 17:55 UTC |
	| start   | -p multinode-441410                               | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=kvm2                                     |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- apply -f                   | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC | 31 Oct 23 17:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- rollout                    | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410     | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
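The `diff ... || { mv ...; systemctl restart ...; }` command above is an idempotent install: the replacement (and, in the real command, the daemon reload and restart) runs only when the new unit differs from the installed one, or when the target is missing, as it is here. A sketch of the same pattern on temp files, without sudo or systemctl:

```shell
# Idempotent "install only if changed" pattern, on temp files.
dir=$(mktemp -d)
printf 'new config\n' > "$dir/docker.service.new"
# diff exits non-zero when the files differ or the target does not exist,
# so the mv (the "install") runs only in those cases.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || \
  mv "$dir/docker.service.new" "$dir/docker.service"
cat "$dir/docker.service"   # prints: new config
```

Running it a second time with an unchanged `.new` file would leave the target untouched, which is why minikube can re-run provisioning without restarting Docker needlessly.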
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
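The `find ... -exec mv {} {}.mk_disabled` step above disables conflicting bridge/podman CNI configs by renaming them rather than deleting them. A sketch of the same rename on a temp directory (filenames illustrative):

```shell
# Disable matching CNI configs by renaming; files stay on disk but the
# runtime no longer loads them. Already-disabled files are skipped.
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/99-loopback.conf"
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) \
     -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni"
```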
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
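The run of sed commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver. A sketch of the `SystemdCgroup` edit applied to a temp copy of the config (the TOML contents here are illustrative):

```shell
# Flip containerd to the cgroupfs driver by disabling SystemdCgroup,
# preserving the line's original indentation via the capture group.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri"]\n  SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```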
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
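The one-liner above refreshes the `host.minikube.internal` entry by filtering out any stale line and appending the current one, staging through a temp file. The same remove-then-append pattern on a temp hosts file (addresses illustrative):

```shell
# Refresh a hosts entry: drop any existing host.minikube.internal line,
# append the current mapping, and stage through a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal$" "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```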
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:08:38 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         12 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              12 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         12 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         12 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         12 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         12 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         12 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	[INFO] 10.244.0.3:60205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093356s
	[INFO] 10.244.0.3:39779 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119857s
	[INFO] 10.244.0.3:45984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097797s
	[INFO] 10.244.0.3:59468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:08:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-441410 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:08:39 up 13 min,  0 users,  load average: 0.34, 0.36, 0.22
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:06:38.472517       1 main.go:227] handling current node
	I1031 18:06:48.484994       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:06:48.485044       1 main.go:227] handling current node
	I1031 18:06:58.499446       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:06:58.499472       1 main.go:227] handling current node
	I1031 18:07:08.505017       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:08.505067       1 main.go:227] handling current node
	I1031 18:07:18.517082       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:18.517134       1 main.go:227] handling current node
	I1031 18:07:28.529885       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:28.529940       1 main.go:227] handling current node
	I1031 18:07:38.543119       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:38.543178       1 main.go:227] handling current node
	I1031 18:07:48.556905       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:48.556945       1 main.go:227] handling current node
	I1031 18:07:58.561390       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:58.561442       1 main.go:227] handling current node
	I1031 18:08:08.570102       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:08.570156       1 main.go:227] handling current node
	I1031 18:08:18.574514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:18.574630       1 main.go:227] handling current node
	I1031 18:08:28.579833       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:28.579881       1 main.go:227] handling current node
	I1031 18:08:38.594754       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:38.594784       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:19.221066       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.234948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="594.089879ms"
	I1031 17:56:19.254141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.095117ms"
	I1031 17:56:19.254510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="276.68µs"
	I1031 17:56:19.254998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.867µs"
	I1031 17:56:19.630954       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1031 17:56:19.680357       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qkwvs"
	I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:08:39 UTC. --
	Oct 31 18:02:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:03:06 multinode-441410 kubelet[2461]: E1031 18:03:06.810213    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:04:06 multinode-441410 kubelet[2461]: E1031 18:04:06.811886    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [74195b9ce844] <==
	* I1031 17:56:31.688139       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 17:56:31.704020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 17:56:31.704452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 17:56:31.715827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 17:56:31.716754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8daaae3b-4ad0-49b1-a652-0df686e74f34", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1 became leader
	I1031 17:56:31.716943       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	I1031 17:56:31.819463       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-441410_650bc7b2-45fa-4685-aed2-1a9538f80de1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  63s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.63s)
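The FailedScheduling event above is the root of this failure: the busybox Deployment spreads its replicas with pod anti-affinity, and with `multinode-441410-m02`'s kubelet stopped only one node is Ready, so the second replica can never be placed ("0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules"). On a live cluster the usual triage would be `kubectl --context multinode-441410 get events --field-selector reason=FailedScheduling` (hypothetical invocation, not part of this run); offline, the same signal can be pulled straight out of the captured `describe` output, as this minimal sketch shows:

```shell
# Count FailedScheduling lines caused by pod anti-affinity in the captured
# post-mortem output (here fed inline via a here-doc instead of a saved file).
grep -c "didn't match pod anti-affinity rules" <<'EOF'
  Warning  FailedScheduling  63s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
EOF
# → 1
```

A nonzero count here, combined with the `kubelet: Stopped` line in the status output for m02, points at the worker node rather than the workload.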

                                                
                                    
TestMultiNode/serial/AddNode (54.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-441410 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-441410 -v 3 --alsologtostderr: (51.756346143s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr
multinode_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr: exit status 2 (620.541105ms)

                                                
                                                
-- stdout --
	multinode-441410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-441410-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-441410-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 18:09:31.942003  266009 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:09:31.942343  266009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:31.942355  266009 out.go:309] Setting ErrFile to fd 2...
	I1031 18:09:31.942360  266009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:31.942535  266009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 18:09:31.942702  266009 out.go:303] Setting JSON to false
	I1031 18:09:31.942740  266009 mustload.go:65] Loading cluster: multinode-441410
	I1031 18:09:31.942904  266009 notify.go:220] Checking for updates...
	I1031 18:09:31.943135  266009 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:09:31.943151  266009 status.go:255] checking status of multinode-441410 ...
	I1031 18:09:31.943612  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:31.943679  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:31.958730  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I1031 18:09:31.959189  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:31.959772  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:31.959795  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:31.960247  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:31.960452  266009 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 18:09:31.962381  266009 status.go:330] multinode-441410 host status = "Running" (err=<nil>)
	I1031 18:09:31.962402  266009 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:31.962900  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:31.962961  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:31.978378  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36357
	I1031 18:09:31.978901  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:31.979539  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:31.979576  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:31.980008  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:31.980195  266009 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 18:09:31.983233  266009 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:31.983695  266009 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:31.983736  266009 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:31.983858  266009 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:31.984157  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:31.984204  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:31.999775  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35547
	I1031 18:09:32.000365  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.000901  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.000930  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.001363  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.001605  266009 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 18:09:32.001883  266009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:32.001917  266009 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 18:09:32.005192  266009 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:32.005717  266009 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:32.005768  266009 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:32.005978  266009 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 18:09:32.006205  266009 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 18:09:32.006380  266009 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 18:09:32.006536  266009 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 18:09:32.095926  266009 ssh_runner.go:195] Run: systemctl --version
	I1031 18:09:32.102269  266009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:32.119126  266009 kubeconfig.go:92] found "multinode-441410" server: "https://192.168.39.206:8443"
	I1031 18:09:32.119161  266009 api_server.go:166] Checking apiserver status ...
	I1031 18:09:32.119196  266009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:09:32.134432  266009 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1894/cgroup
	I1031 18:09:32.146157  266009 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e"
	I1031 18:09:32.146229  266009 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e/freezer.state
	I1031 18:09:32.157010  266009 api_server.go:204] freezer state: "THAWED"
	I1031 18:09:32.157046  266009 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 18:09:32.161855  266009 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 18:09:32.161887  266009 status.go:421] multinode-441410 apiserver status = Running (err=<nil>)
	I1031 18:09:32.161897  266009 status.go:257] multinode-441410 status: &{Name:multinode-441410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:32.161915  266009 status.go:255] checking status of multinode-441410-m02 ...
	I1031 18:09:32.162270  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.162314  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.177111  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I1031 18:09:32.177628  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.178129  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.178155  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.178546  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.178742  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 18:09:32.180525  266009 status.go:330] multinode-441410-m02 host status = "Running" (err=<nil>)
	I1031 18:09:32.180544  266009 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:32.180833  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.180875  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.196912  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I1031 18:09:32.197355  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.197816  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.197836  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.198262  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.198481  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 18:09:32.201669  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:32.202149  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:32.202200  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:32.202356  266009 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:32.202659  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.202704  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.217767  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41609
	I1031 18:09:32.218389  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.218962  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.218991  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.219308  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.219499  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 18:09:32.219747  266009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:32.219771  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 18:09:32.222473  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:32.222959  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:32.222996  266009 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:32.223163  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 18:09:32.223364  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 18:09:32.223519  266009 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 18:09:32.223664  266009 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 18:09:32.319143  266009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:32.336095  266009 status.go:257] multinode-441410-m02 status: &{Name:multinode-441410-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:32.336135  266009 status.go:255] checking status of multinode-441410-m03 ...
	I1031 18:09:32.336503  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.336555  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.351410  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I1031 18:09:32.351896  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.352356  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.352382  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.352761  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.353007  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetState
	I1031 18:09:32.354749  266009 status.go:330] multinode-441410-m03 host status = "Running" (err=<nil>)
	I1031 18:09:32.354768  266009 host.go:66] Checking if "multinode-441410-m03" exists ...
	I1031 18:09:32.355174  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.355218  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.369758  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45131
	I1031 18:09:32.370254  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.370779  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.370807  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.371154  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.371344  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetIP
	I1031 18:09:32.373915  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:32.374418  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:4b:9a", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 19:08:55 +0000 UTC Type:0 Mac:52:54:00:55:4b:9a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-441410-m03 Clientid:01:52:54:00:55:4b:9a}
	I1031 18:09:32.374452  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:32.374603  266009 host.go:66] Checking if "multinode-441410-m03" exists ...
	I1031 18:09:32.374920  266009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:32.374965  266009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:32.390301  266009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I1031 18:09:32.390781  266009 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:32.391271  266009 main.go:141] libmachine: Using API Version  1
	I1031 18:09:32.391289  266009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:32.391729  266009 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:32.391948  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .DriverName
	I1031 18:09:32.392163  266009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:32.392186  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHHostname
	I1031 18:09:32.395097  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:32.395544  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:4b:9a", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 19:08:55 +0000 UTC Type:0 Mac:52:54:00:55:4b:9a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-441410-m03 Clientid:01:52:54:00:55:4b:9a}
	I1031 18:09:32.395579  266009 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:32.395726  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHPort
	I1031 18:09:32.395948  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHKeyPath
	I1031 18:09:32.396102  266009 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHUsername
	I1031 18:09:32.396365  266009 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m03/id_rsa Username:docker}
	I1031 18:09:32.481131  266009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:32.496866  266009 status.go:257] multinode-441410-m03 status: &{Name:multinode-441410-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:118: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.010593006s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| start   | -p multinode-441410                               | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --wait=true --memory=2200                         |                  |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |                |                     |                     |
	|         | --alsologtostderr                                 |                  |         |                |                     |                     |
	|         | --driver=kvm2                                     |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- apply -f                   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC | 31 Oct 23 17:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- rollout                    | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox                         |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- sh                    |                  |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                  |         |                |                     |                     |
	| node    | add -p multinode-441410 -v 3                      | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:09 UTC |
	|         | --alsologtostderr                                 |                  |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
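The `diff ... || { mv ...; systemctl ... }` one-liner above is a "replace only if changed" pattern: the new unit is written as `docker.service.new`, and only when it differs from (or replaces a missing) `docker.service` does the move-and-restart branch fire. A minimal sketch of the same pattern on scratch files, with the `systemctl` calls replaced by an `echo` stand-in:

```shell
# First run: docker.service does not exist, so diff fails (as in the
# "can't stat" output above) and the .new file is moved into place.
DIR=$(mktemp -d)
printf 'new unit\n' > "$DIR/docker.service.new"

diff -u "$DIR/docker.service" "$DIR/docker.service.new" 2>/dev/null || {
  mv "$DIR/docker.service.new" "$DIR/docker.service"
  echo "restarted"   # stand-in for: systemctl daemon-reload && systemctl restart docker
}
```

On a second run with identical content, `diff` exits 0 and the restart branch is skipped entirely, which avoids needlessly bouncing the Docker daemon.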
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
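The guest-clock check in `fix.go` above runs `date +%s.%N` on the guest, compares it to the host timestamp, and accepts the machine if the absolute delta is within a tolerance (here ~93ms). A sketch of that comparison in shell, using the two timestamps from the log and a made-up tolerance value:

```shell
# Compare guest vs. host timestamps (seconds.nanoseconds) and decide
# whether the skew is acceptable. Values copied from the log; the
# 1-second tolerance is illustrative, not minikube's actual constant.
GUEST=1698774942.272150082
HOST=1698774942.179175821
TOLERANCE=1

WITHIN=$(awk -v g="$GUEST" -v h="$HOST" -v t="$TOLERANCE" \
  'BEGIN { d = g - h; if (d < 0) d = -d; print (d <= t) ? "yes" : "no" }')
echo "$WITHIN"
```

`awk` is used here because plain shell arithmetic cannot handle the fractional nanosecond part.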
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
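	The run of `sed` commands above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver and the standard CNI conf directory. A minimal sketch of the same pattern against a scratch copy of the file (the sample TOML below is illustrative, not minikube's actual template):

```shell
# Work on a scratch copy so the sketch never touches a real config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.mk"
EOF

# Same substitutions the log shows: disable the systemd cgroup driver
# and point the CNI conf_dir back at /etc/cni/net.d.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

grep SystemdCgroup "$cfg"
```

	Note that `sed -i -r` with capture groups preserves the original indentation, which is why the log's expressions wrap the leading spaces in `( *)` and re-emit them as `\1`.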
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
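	The scp above installs a systemd drop-in: a directory named `<unit>.d` next to the unit file, whose `*.conf` fragments systemd merges on top of the base unit. The pattern can be sketched under a scratch root; the ExecStart content here is a hypothetical fragment, not the real 10-cni.conf:

```shell
root=$(mktemp -d)
dropin="$root/etc/systemd/system/cri-docker.service.d"
mkdir -p "$dropin"

# Hypothetical fragment: only overridden keys need to appear. The empty
# ExecStart= first clears the base unit's command, as the docker.service
# drop-in earlier in this log also does.
cat > "$dropin/10-cni.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni
EOF

ls "$dropin"
```

	After writing a drop-in, a `systemctl daemon-reload` (as the log runs next) is required before the change takes effect.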
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
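	The one-liner above is an idempotent hosts-entry update: strip any existing line ending in the hostname, then append the fresh mapping, so reruns never duplicate the entry. The same idiom on a scratch file (IP and hostname taken from the log):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.1.5\thost.minikube.internal\n' > "$hosts"

# Drop any stale entry for the name (tab-anchored match, as in the log),
# then append the current mapping, and replace the file atomically.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

	The `$'\t...'` quoting makes the match tab-anchored, so a hostname that merely contains the string as a suffix of another name is not removed.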
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
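	The preload step above copies an lz4-compressed image tarball to the guest and unpacks it with `tar -I lz4 -C /var -xf`. The `-I <prog>` (external compressor) plus `-C <dir>` (change directory before extracting) pattern, sketched here with gzip so it runs without an lz4 binary installed:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/file.txt"

# Pack via an external compressor with -I, then unpack into a target
# directory with -C, mirroring `tar -I lz4 -C /var -xf preloaded.tar.lz4`.
tar -I gzip -cf "$src/preloaded.tar.gz" -C "$src" file.txt
tar -I gzip -C "$dst" -xf "$src/preloaded.tar.gz"

cat "$dst/file.txt"
```

	`-I` is a GNU tar option; on non-GNU tar the equivalent is piping through the compressor explicitly.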
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f50
57b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
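The hash-and-symlink sequence above can be reproduced standalone. OpenSSL resolves CAs in a hashed certificate directory via `<subject-hash>.0` symlinks, which is exactly the shape of the `b5213941.0` link created above. This is a minimal sketch using a throwaway self-signed certificate and a temp directory as stand-ins for minikube's actual CA and `/etc/ssl/certs`:

```shell
# Sketch of the subject-hash symlink convention applied in the log above.
# A throwaway self-signed cert stands in for minikubeCA.pem; a temp dir
# stands in for /etc/ssl/certs.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
# -hash prints the 8-hex-digit subject-name hash OpenSSL looks up links by
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"   # same shape as b5213941.0 above
ls -l "$dir/$hash.0"
```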
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
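The kubelet unit fragment above is a systemd drop-in: the empty `ExecStart=` line clears the base unit's command before the second `ExecStart=` sets the real one. A minimal sketch of that layout, written to a temp dir rather than `/etc/systemd/system` (the drop-in name and the trimmed flag set are stand-ins, not minikube's exact file):

```shell
# Sketch of a kubelet systemd drop-in like the fragment above.
# Written to a temp dir, not /etc/systemd/system; the file name
# 10-kubeadm.conf and the shortened ExecStart flags are stand-ins.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/kubelet.service.d"
cat > "$dir/kubelet.service.d/10-kubeadm.conf" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --config=/var/lib/kubelet/config.yaml
EOF
# Two ExecStart lines: the empty one resets the inherited command,
# the second one replaces it.
grep -c '^ExecStart=' "$dir/kubelet.service.d/10-kubeadm.conf"
```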
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
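The GUEST_START failure above is a network reset mid-download; the `?checksum=file:<url>.sha256` suffix on the URL is go-getter's way of fetching a digest file and verifying the artifact against it. An offline sketch of the equivalent verification step, with a fake payload standing in for the kubelet binary (no network used):

```shell
# Offline sketch of the checksum verification behind the failed download.
# go-getter's "?checksum=file:<url>.sha256" is equivalent in effect to
# fetching the digest and running sha256sum -c. A fake payload stands in
# for the real kubelet binary.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'fake-kubelet-payload' > kubelet
# stand-in for the fetched kubelet.sha256 digest file
sha256sum kubelet | awk '{print $1}' > kubelet.sha256
# verification: fails with a non-zero exit if the digest does not match
echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -
```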
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:33 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         13 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              13 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         13 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         13 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         13 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         13 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         13 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	[INFO] 10.244.0.3:60205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093356s
	[INFO] 10.244.0.3:39779 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119857s
	[INFO] 10.244.0.3:45984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097797s
	[INFO] 10.244.0.3:59468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-441410 status is now: NodeReady
	
	
	Name:               multinode-441410-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 18:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-441410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b3d12434efc4b28b1f56666426107d6
	  System UUID:                2b3d1243-4efc-4b28-b1f5-6666426107d6
	  Boot ID:                    5adda0f0-d573-4bed-8f66-685fc9152dac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9hq7l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-c9rvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x5 over 20s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x5 over 20s)  kubelet          Node multinode-441410-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x5 over 20s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s                node-controller  Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller
	  Normal  NodeReady                3s                 kubelet          Node multinode-441410-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:09:33 up 14 min,  0 users,  load average: 0.29, 0.33, 0.21
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:07:58.561390       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:58.561442       1 main.go:227] handling current node
	I1031 18:08:08.570102       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:08.570156       1 main.go:227] handling current node
	I1031 18:08:18.574514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:18.574630       1 main.go:227] handling current node
	I1031 18:08:28.579833       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:28.579881       1 main.go:227] handling current node
	I1031 18:08:38.594754       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:38.594784       1 main.go:227] handling current node
	I1031 18:08:48.608633       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:48.608684       1 main.go:227] handling current node
	I1031 18:08:58.621071       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:58.621423       1 main.go:227] handling current node
	I1031 18:09:08.631544       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:08.631568       1 main.go:227] handling current node
	I1031 18:09:18.637175       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:18.637526       1 main.go:227] handling current node
	I1031 18:09:18.637616       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:18.637763       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:18.638179       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.127 Flags: [] Table: 0} 
	I1031 18:09:28.646550       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:28.646574       1 main.go:227] handling current node
	I1031 18:09:28.646588       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:28.646593       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	I1031 18:09:15.207912       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-441410-m03\" does not exist"
	I1031 18:09:15.231014       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-441410-m03" podCIDRs=["10.244.1.0/24"]
	I1031 18:09:15.237884       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9hq7l"
	I1031 18:09:15.237930       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c9rvt"
	I1031 18:09:18.211568       1 event.go:307] "Event occurred" object="multinode-441410-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller"
	I1031 18:09:18.212158       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-441410-m03"
	I1031 18:09:30.048381       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-441410-m03"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:33 UTC. --
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:04:06 multinode-441410 kubelet[2461]: E1031 18:04:06.811886    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:09:06 multinode-441410 kubelet[2461]: E1031 18:09:06.811861    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/AddNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  117s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (54.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (2.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status --output json --alsologtostderr: exit status 2 (619.174682ms)

                                                
                                                
-- stdout --
	[{"Name":"multinode-441410","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-441410-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-441410-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 18:09:34.790380  266215 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:09:34.790542  266215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:34.790557  266215 out.go:309] Setting ErrFile to fd 2...
	I1031 18:09:34.790565  266215 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:34.790770  266215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 18:09:34.790949  266215 out.go:303] Setting JSON to true
	I1031 18:09:34.790990  266215 mustload.go:65] Loading cluster: multinode-441410
	I1031 18:09:34.791126  266215 notify.go:220] Checking for updates...
	I1031 18:09:34.791590  266215 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:09:34.791613  266215 status.go:255] checking status of multinode-441410 ...
	I1031 18:09:34.792219  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:34.792296  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:34.812361  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I1031 18:09:34.812866  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:34.813552  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:34.813597  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:34.814020  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:34.814262  266215 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 18:09:34.816086  266215 status.go:330] multinode-441410 host status = "Running" (err=<nil>)
	I1031 18:09:34.816109  266215 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:34.816564  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:34.816624  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:34.834642  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I1031 18:09:34.835085  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:34.835609  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:34.835638  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:34.836018  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:34.836190  266215 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 18:09:34.839473  266215 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:34.839966  266215 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:34.840000  266215 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:34.840139  266215 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:34.840504  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:34.840552  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:34.856289  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I1031 18:09:34.856854  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:34.857457  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:34.857483  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:34.857826  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:34.858058  266215 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 18:09:34.858293  266215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:34.858335  266215 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 18:09:34.861577  266215 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:34.862085  266215 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:34.862127  266215 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:34.862240  266215 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 18:09:34.862443  266215 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 18:09:34.862623  266215 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 18:09:34.862797  266215 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 18:09:34.959447  266215 ssh_runner.go:195] Run: systemctl --version
	I1031 18:09:34.966313  266215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:34.980979  266215 kubeconfig.go:92] found "multinode-441410" server: "https://192.168.39.206:8443"
	I1031 18:09:34.981018  266215 api_server.go:166] Checking apiserver status ...
	I1031 18:09:34.981063  266215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:09:34.993804  266215 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1894/cgroup
	I1031 18:09:35.004590  266215 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e"
	I1031 18:09:35.004731  266215 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e/freezer.state
	I1031 18:09:35.015081  266215 api_server.go:204] freezer state: "THAWED"
	I1031 18:09:35.015119  266215 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 18:09:35.020365  266215 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 18:09:35.020395  266215 status.go:421] multinode-441410 apiserver status = Running (err=<nil>)
	I1031 18:09:35.020409  266215 status.go:257] multinode-441410 status: &{Name:multinode-441410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:35.020429  266215 status.go:255] checking status of multinode-441410-m02 ...
	I1031 18:09:35.020761  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.020811  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.035567  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39725
	I1031 18:09:35.036077  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.036611  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.036636  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.036952  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.037164  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 18:09:35.038935  266215 status.go:330] multinode-441410-m02 host status = "Running" (err=<nil>)
	I1031 18:09:35.038962  266215 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:35.039296  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.039340  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.054071  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I1031 18:09:35.054559  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.055009  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.055041  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.055399  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.055592  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 18:09:35.058670  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:35.059109  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:35.059146  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:35.059356  266215 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:35.059659  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.059703  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.074207  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
	I1031 18:09:35.074734  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.075247  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.075277  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.075596  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.075801  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 18:09:35.076027  266215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:35.076054  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 18:09:35.079316  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:35.079886  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:35.079936  266215 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:35.080139  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 18:09:35.082259  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 18:09:35.082501  266215 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 18:09:35.082710  266215 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 18:09:35.173131  266215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:35.187486  266215 status.go:257] multinode-441410-m02 status: &{Name:multinode-441410-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:35.187533  266215 status.go:255] checking status of multinode-441410-m03 ...
	I1031 18:09:35.187882  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.187936  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.203757  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I1031 18:09:35.204286  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.204762  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.204791  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.205234  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.205561  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetState
	I1031 18:09:35.207445  266215 status.go:330] multinode-441410-m03 host status = "Running" (err=<nil>)
	I1031 18:09:35.207465  266215 host.go:66] Checking if "multinode-441410-m03" exists ...
	I1031 18:09:35.207868  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.207917  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.224004  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I1031 18:09:35.224513  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.224970  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.224994  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.225299  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.225495  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetIP
	I1031 18:09:35.228266  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:35.228816  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:4b:9a", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 19:08:55 +0000 UTC Type:0 Mac:52:54:00:55:4b:9a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-441410-m03 Clientid:01:52:54:00:55:4b:9a}
	I1031 18:09:35.228842  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:35.229025  266215 host.go:66] Checking if "multinode-441410-m03" exists ...
	I1031 18:09:35.229345  266215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:35.229397  266215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:35.244724  266215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I1031 18:09:35.245176  266215 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:35.245642  266215 main.go:141] libmachine: Using API Version  1
	I1031 18:09:35.245666  266215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:35.245983  266215 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:35.246174  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .DriverName
	I1031 18:09:35.246376  266215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:35.246414  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHHostname
	I1031 18:09:35.248974  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:35.249429  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:4b:9a", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 19:08:55 +0000 UTC Type:0 Mac:52:54:00:55:4b:9a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-441410-m03 Clientid:01:52:54:00:55:4b:9a}
	I1031 18:09:35.249469  266215 main.go:141] libmachine: (multinode-441410-m03) DBG | domain multinode-441410-m03 has defined IP address 192.168.39.127 and MAC address 52:54:00:55:4b:9a in network mk-multinode-441410
	I1031 18:09:35.249578  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHPort
	I1031 18:09:35.249763  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHKeyPath
	I1031 18:09:35.249929  266215 main.go:141] libmachine: (multinode-441410-m03) Calling .GetSSHUsername
	I1031 18:09:35.250092  266215 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m03/id_rsa Username:docker}
	I1031 18:09:35.332854  266215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:35.346164  266215 status.go:257] multinode-441410-m03 status: &{Name:multinode-441410-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:175: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-441410 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.003507848s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| start   | -p multinode-441410                               | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:55 UTC |                     |
	|         | --wait=true --memory=2200                         |                  |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |                |                     |                     |
	|         | --alsologtostderr                                 |                  |         |                |                     |                     |
	|         | --driver=kvm2                                     |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- apply -f                   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC | 31 Oct 23 17:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- rollout                    | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox                         |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- sh                    |                  |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                  |         |                |                     |                     |
	| node    | add -p multinode-441410 -v 3                      | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:09 UTC |
	|         | --alsologtostderr                                 |                  |         |                |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:36 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         13 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              13 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         13 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         13 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         13 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         13 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         13 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	[INFO] 10.244.0.3:60205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093356s
	[INFO] 10.244.0.3:39779 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119857s
	[INFO] 10.244.0.3:45984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097797s
	[INFO] 10.244.0.3:59468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-441410 status is now: NodeReady
	
	
	Name:               multinode-441410-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 18:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-441410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b3d12434efc4b28b1f56666426107d6
	  System UUID:                2b3d1243-4efc-4b28-b1f5-6666426107d6
	  Boot ID:                    5adda0f0-d573-4bed-8f66-685fc9152dac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9hq7l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21s
	  kube-system                 kube-proxy-c9rvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  21s (x5 over 23s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x5 over 23s)  kubelet          Node multinode-441410-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x5 over 23s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18s                node-controller  Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller
	  Normal  NodeReady                6s                 kubelet          Node multinode-441410-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:09:36 up 14 min,  0 users,  load average: 0.29, 0.33, 0.21
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:07:58.561390       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:07:58.561442       1 main.go:227] handling current node
	I1031 18:08:08.570102       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:08.570156       1 main.go:227] handling current node
	I1031 18:08:18.574514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:18.574630       1 main.go:227] handling current node
	I1031 18:08:28.579833       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:28.579881       1 main.go:227] handling current node
	I1031 18:08:38.594754       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:38.594784       1 main.go:227] handling current node
	I1031 18:08:48.608633       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:48.608684       1 main.go:227] handling current node
	I1031 18:08:58.621071       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:58.621423       1 main.go:227] handling current node
	I1031 18:09:08.631544       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:08.631568       1 main.go:227] handling current node
	I1031 18:09:18.637175       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:18.637526       1 main.go:227] handling current node
	I1031 18:09:18.637616       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:18.637763       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:18.638179       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.127 Flags: [] Table: 0} 
	I1031 18:09:28.646550       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:28.646574       1 main.go:227] handling current node
	I1031 18:09:28.646588       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:28.646593       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	I1031 18:09:15.207912       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-441410-m03\" does not exist"
	I1031 18:09:15.231014       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-441410-m03" podCIDRs=["10.244.1.0/24"]
	I1031 18:09:15.237884       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9hq7l"
	I1031 18:09:15.237930       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c9rvt"
	I1031 18:09:18.211568       1 event.go:307] "Event occurred" object="multinode-441410-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller"
	I1031 18:09:18.212158       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-441410-m03"
	I1031 18:09:30.048381       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-441410-m03"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:36 UTC. --
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:04:06 multinode-441410 kubelet[2461]: E1031 18:04:06.811886    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:09:06 multinode-441410 kubelet[2461]: E1031 18:09:06.811861    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

                                                
                                                
-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  2m (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (2.55s)

TestMultiNode/serial/StopNode (6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 node stop m03: (3.110735212s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status: exit status 7 (467.784468ms)

                                                
                                                
-- stdout --
	multinode-441410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-441410-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-441410-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr: exit status 7 (479.165542ms)

                                                
                                                
-- stdout --
	multinode-441410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-441410-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-441410-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 18:09:40.914720  266496 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:09:40.914976  266496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:40.914985  266496 out.go:309] Setting ErrFile to fd 2...
	I1031 18:09:40.914989  266496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:09:40.915209  266496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 18:09:40.915373  266496 out.go:303] Setting JSON to false
	I1031 18:09:40.915408  266496 mustload.go:65] Loading cluster: multinode-441410
	I1031 18:09:40.915472  266496 notify.go:220] Checking for updates...
	I1031 18:09:40.915774  266496 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:09:40.915790  266496 status.go:255] checking status of multinode-441410 ...
	I1031 18:09:40.916177  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:40.916239  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:40.936671  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I1031 18:09:40.937171  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:40.937811  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:40.937850  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:40.938270  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:40.938482  266496 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 18:09:40.940041  266496 status.go:330] multinode-441410 host status = "Running" (err=<nil>)
	I1031 18:09:40.940064  266496 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:40.940360  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:40.940412  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:40.956879  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46783
	I1031 18:09:40.957298  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:40.957760  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:40.957790  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:40.958179  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:40.958397  266496 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 18:09:40.961170  266496 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:40.961945  266496 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:40.961987  266496 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:40.962175  266496 host.go:66] Checking if "multinode-441410" exists ...
	I1031 18:09:40.962483  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:40.962529  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:40.977830  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I1031 18:09:40.978364  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:40.978906  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:40.978950  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:40.979293  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:40.979507  266496 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 18:09:40.979868  266496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:40.979905  266496 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 18:09:40.982948  266496 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:40.983285  266496 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 18:09:40.983314  266496 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 18:09:40.983535  266496 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 18:09:40.983745  266496 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 18:09:40.983907  266496 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 18:09:40.984052  266496 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 18:09:41.079560  266496 ssh_runner.go:195] Run: systemctl --version
	I1031 18:09:41.085307  266496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:41.099060  266496 kubeconfig.go:92] found "multinode-441410" server: "https://192.168.39.206:8443"
	I1031 18:09:41.099096  266496 api_server.go:166] Checking apiserver status ...
	I1031 18:09:41.099139  266496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:09:41.111803  266496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1894/cgroup
	I1031 18:09:41.122571  266496 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e"
	I1031 18:09:41.122670  266496 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf4f584a5c299b8b91cb08104ddd09da0/1cf5febbb4d5f5f667ac1bef6d4e3dc085a7eaf8ca81e7e615f868092514843e/freezer.state
	I1031 18:09:41.132174  266496 api_server.go:204] freezer state: "THAWED"
	I1031 18:09:41.132211  266496 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 18:09:41.139827  266496 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 18:09:41.139858  266496 status.go:421] multinode-441410 apiserver status = Running (err=<nil>)
	I1031 18:09:41.139868  266496 status.go:257] multinode-441410 status: &{Name:multinode-441410 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:41.139885  266496 status.go:255] checking status of multinode-441410-m02 ...
	I1031 18:09:41.140212  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:41.140260  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:41.156749  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I1031 18:09:41.157217  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:41.157810  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:41.157834  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:41.158251  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:41.158532  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 18:09:41.160303  266496 status.go:330] multinode-441410-m02 host status = "Running" (err=<nil>)
	I1031 18:09:41.160323  266496 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:41.160604  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:41.160645  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:41.175909  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I1031 18:09:41.176332  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:41.176855  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:41.176877  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:41.177253  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:41.177458  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 18:09:41.180451  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:41.180902  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:41.180937  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:41.181141  266496 host.go:66] Checking if "multinode-441410-m02" exists ...
	I1031 18:09:41.181482  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:41.181532  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:41.196873  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I1031 18:09:41.197336  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:41.197962  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:41.197996  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:41.198357  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:41.198545  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 18:09:41.198731  266496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:09:41.198754  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 18:09:41.201869  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:41.202335  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 18:09:41.202374  266496 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 18:09:41.202579  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 18:09:41.202766  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 18:09:41.202925  266496 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 18:09:41.203054  266496 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 18:09:41.297372  266496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:09:41.310976  266496 status.go:257] multinode-441410-m02 status: &{Name:multinode-441410-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:09:41.311015  266496 status.go:255] checking status of multinode-441410-m03 ...
	I1031 18:09:41.311324  266496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:09:41.311367  266496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:09:41.326462  266496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I1031 18:09:41.326962  266496 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:09:41.327455  266496 main.go:141] libmachine: Using API Version  1
	I1031 18:09:41.327479  266496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:09:41.327846  266496 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:09:41.328075  266496 main.go:141] libmachine: (multinode-441410-m03) Calling .GetState
	I1031 18:09:41.329944  266496 status.go:330] multinode-441410-m03 host status = "Stopped" (err=<nil>)
	I1031 18:09:41.329968  266496 status.go:343] host is not running, skipping remaining checks
	I1031 18:09:41.329976  266496 status.go:257] multinode-441410-m03 status: &{Name:multinode-441410-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr": multinode-441410
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-441410-m02
type: Worker
host: Running
kubelet: Stopped

multinode-441410-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:237: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr": multinode-441410
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-441410-m02
type: Worker
host: Running
kubelet: Stopped

multinode-441410-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.029281206s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| kubectl | -p multinode-441410 -- apply -f                   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC | 31 Oct 23 17:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- rollout                    | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox                         |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --                       |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup              |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o                | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc                          |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec                       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- sh                    |                  |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                  |         |                |                     |                     |
	| node    | add -p multinode-441410 -v 3                      | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:09 UTC |
	|         | --alsologtostderr                                 |                  |         |                |                     |                     |
	| node    | multinode-441410 node stop m03                    | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:09 UTC | 31 Oct 23 18:09 UTC |
	|---------|---------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
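Annotation: the `%!s(MISSING)` in the command above is an artifact of minikube's own log formatting (a Go `fmt` verb whose argument was consumed elsewhere); the command that actually ran is a plain `printf | sudo tee` writing `/etc/crictl.yaml`. A sketch against a scratch directory instead of `/etc`:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/etc"
# point crictl at the cri-dockerd socket; tee echoes the line, matching the log
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  | tee "$dir/etc/crictl.yaml"
```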
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
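Annotation: the `grep -v` / `echo` / `cp` pipeline above is an idempotent "replace-or-append this hosts entry" idiom: any stale `host.minikube.internal` line is filtered out before the current mapping is appended. A sketch on a scratch hosts file (the `192.168.39.99` stale entry is invented for illustration; `192.168.39.1` is the gateway IP from the log):

```shell
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1 localhost\n192.168.39.99\thost.minikube.internal\n' > "$hosts"
# drop any stale host.minikube.internal entry, then append the current one
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old line is removed first, rerunning the pipeline never accumulates duplicate entries.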
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
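Annotation: the existence check above is `stat -c "%s %y" /preloaded.tar.lz4` (the `%!s(MISSING)`/`%!y(MISSING)` are again minikube log-formatting artifacts, not the command that ran). A non-zero exit means the preload tarball is absent, so minikube scp's it over. Sketch with scratch files:

```shell
f=$(mktemp)
printf 'x' > "$f"
stat -c "%s %y" "$f"                  # present: prints size and mtime, exit 0
# absent: stat exits non-zero, which is the signal to copy the tarball over
stat -c "%s %y" "$f.absent" 2>/dev/null || echo "preload missing: copy it over"
```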
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f50
57b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
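Annotation: the generated kubeadm config above is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to see which kinds a multi-document file carries, run here on a scratch skeleton mirroring the log's structure:

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# list the kind of each embedded document
grep '^kind:' "$cfg"
```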
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
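For context, the SSH command logged above uses a diff-or-swap pattern so the docker unit is only replaced (and the daemon only restarted) when the freshly generated file actually differs. A minimal sketch of that pattern against scratch files (paths illustrative; the real command operates on `/lib/systemd/system` and runs `systemctl daemon-reload` in place of the `echo`):

```shell
# Idempotent unit-install sketch: swap only when the new file differs.
unit=/tmp/demo-docker.service
new=/tmp/demo-docker.service.new
rm -f "$unit"                                  # start from a clean slate
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$new"
# diff exits non-zero when the files differ or $unit is missing,
# so the block after || performs the swap exactly once.
diff -u "$unit" "$new" 2>/dev/null || {
  mv "$new" "$unit"
  echo "installed new unit"   # stand-in for daemon-reload/enable/restart
}
cat "$unit"
```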
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
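The `find ... -exec` step above disables conflicting bridge/podman CNI configs by renaming them aside with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A sketch of the same selection logic on a scratch directory (directory name illustrative):

```shell
# Rename bridge/podman CNI configs aside; leave everything else untouched.
d=/tmp/demo-cni
mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-other.conf"
find "$d" -maxdepth 1 -type f \
  \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Only the matching file gains the suffix; `10-other.conf` is left in place.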
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
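The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place while preserving indentation: the capture group `( *)` keeps the leading spaces and `\1` replays them. A sketch of the `SystemdCgroup` edit on a scratch copy (path illustrative; the real target is the containerd config):

```shell
# Flip SystemdCgroup to false, preserving the line's indentation.
cfg=/tmp/demo-config.toml
printf '          SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Note this relies on GNU sed's `-i` and `-r` flags, which is what the Buildroot guest provides.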
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
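The `/etc/hosts` update above uses a filter-then-append pattern: strip any existing `host.minikube.internal` entry, append the fresh one, and copy the result back, so reruns never duplicate the line. Demonstrated on a scratch file (path illustrative):

```shell
# Filter-then-append keeps exactly one host.minikube.internal entry.
hosts=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
{ grep -v "${tab}host.minikube.internal$" "$hosts"
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # prints 1
```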
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:42 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         13 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              13 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         13 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         13 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         13 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         13 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         13 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	[INFO] 10.244.0.3:60205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093356s
	[INFO] 10.244.0.3:39779 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119857s
	[INFO] 10.244.0.3:45984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097797s
	[INFO] 10.244.0.3:59468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-441410 status is now: NodeReady
	
	
	Name:               multinode-441410-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 18:09:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:09:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:09:30 +0000   Tue, 31 Oct 2023 18:09:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-441410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b3d12434efc4b28b1f56666426107d6
	  System UUID:                2b3d1243-4efc-4b28-b1f5-6666426107d6
	  Boot ID:                    5adda0f0-d573-4bed-8f66-685fc9152dac
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9hq7l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27s
	  kube-system                 kube-proxy-c9rvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientMemory  27s (x5 over 29s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x5 over 29s)  kubelet          Node multinode-441410-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x5 over 29s)  kubelet          Node multinode-441410-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-441410-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:09:42 up 14 min,  0 users,  load average: 0.43, 0.36, 0.22
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:08:18.574514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:18.574630       1 main.go:227] handling current node
	I1031 18:08:28.579833       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:28.579881       1 main.go:227] handling current node
	I1031 18:08:38.594754       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:38.594784       1 main.go:227] handling current node
	I1031 18:08:48.608633       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:48.608684       1 main.go:227] handling current node
	I1031 18:08:58.621071       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:08:58.621423       1 main.go:227] handling current node
	I1031 18:09:08.631544       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:08.631568       1 main.go:227] handling current node
	I1031 18:09:18.637175       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:18.637526       1 main.go:227] handling current node
	I1031 18:09:18.637616       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:18.637763       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:18.638179       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.127 Flags: [] Table: 0} 
	I1031 18:09:28.646550       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:28.646574       1 main.go:227] handling current node
	I1031 18:09:28.646588       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:28.646593       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:38.658930       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:38.658961       1 main.go:227] handling current node
	I1031 18:09:38.658979       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:38.658984       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:19.700507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.877092ms"
	I1031 17:56:19.722531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.945099ms"
	I1031 17:56:19.722972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.332µs"
	I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	I1031 18:09:15.207912       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-441410-m03\" does not exist"
	I1031 18:09:15.231014       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-441410-m03" podCIDRs=["10.244.1.0/24"]
	I1031 18:09:15.237884       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9hq7l"
	I1031 18:09:15.237930       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c9rvt"
	I1031 18:09:18.211568       1 event.go:307] "Event occurred" object="multinode-441410-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller"
	I1031 18:09:18.212158       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-441410-m03"
	I1031 18:09:30.048381       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-441410-m03"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:09:42 UTC. --
	Oct 31 18:03:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:04:06 multinode-441410 kubelet[2461]: E1031 18:04:06.811886    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:09:06 multinode-441410 kubelet[2461]: E1031 18:09:06.811861    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/StopNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m6s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (6.00s)
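The Pending busybox pod above is blocked by the scheduler event at the end of the describe output: with only one schedulable node, a replica constrained by pod anti-affinity has nowhere left to land. A minimal sketch of that filter step (node and pod names taken from the log; the anti-affinity topology is assumed, since the report does not show the Deployment spec):

```python
# Scheduler filter sketch: anti-affinity rejects any node that already
# hosts a replica of the same Deployment (assumed topologyKey: hostname).
schedulable_nodes = ["multinode-441410"]  # the only node passing filters
placed = {"multinode-441410": "busybox-5bc68d56bd-682nc"}  # running replica

feasible = [n for n in schedulable_nodes if n not in placed]
assert feasible == []  # matches "0/1 nodes are available" in the event
```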

TestMultiNode/serial/StartAfterStop (35.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 node start m03 --alsologtostderr
E1031 18:10:13.566031  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 node start m03 --alsologtostderr: (33.163953189s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status: exit status 2 (637.866634ms)

-- stdout --
	multinode-441410
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-441410-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-441410-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-441410 status" : exit status 2
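The `exit status 2` above is consistent with `minikube status` reporting a degraded cluster: although `node start m03` succeeded, m02's kubelet is still `Stopped` in the stdout block. A sketch of that exit-code decision (semantics inferred from the observed output, not from minikube's source):

```python
# Node states copied from the status output above.
nodes = {
    "multinode-441410":     {"host": "Running", "kubelet": "Running", "apiserver": "Running"},
    "multinode-441410-m02": {"host": "Running", "kubelet": "Stopped"},
    "multinode-441410-m03": {"host": "Running", "kubelet": "Running"},
}

# Assumed rule: any stopped component degrades the overall status.
exit_code = 2 if any("Stopped" in n.values() for n in nodes.values()) else 0
assert exit_code == 2  # matches the observed non-zero exit
```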
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-441410 -n multinode-441410
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 logs -n 25: (1.036243844s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| kubectl | -p multinode-441410 -- rollout       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:57 UTC |                     |
	|         | status deployment/busybox            |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:07 UTC | 31 Oct 23 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --          |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --          |                  |         |                |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp --          |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc --          |                  |         |                |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp -- nslookup |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- nslookup |                  |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- get pods -o   | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC |                     |
	|         | busybox-5bc68d56bd-67pbp             |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc             |                  |         |                |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |                |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |                |                     |                     |
	| kubectl | -p multinode-441410 -- exec          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:08 UTC |
	|         | busybox-5bc68d56bd-682nc -- sh       |                  |         |                |                     |                     |
	|         | -c ping -c 1 192.168.39.1            |                  |         |                |                     |                     |
	| node    | add -p multinode-441410 -v 3         | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:08 UTC | 31 Oct 23 18:09 UTC |
	|         | --alsologtostderr                    |                  |         |                |                     |                     |
	| node    | multinode-441410 node stop m03       | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:09 UTC | 31 Oct 23 18:09 UTC |
	| node    | multinode-441410 node start          | multinode-441410 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:09 UTC | 31 Oct 23 18:10 UTC |
	|         | m03 --alsologtostderr                |                  |         |                |                     |                     |
	|---------|--------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:55:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:55:19.332254  262782 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:55:19.332513  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332521  262782 out.go:309] Setting ErrFile to fd 2...
	I1031 17:55:19.332526  262782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:55:19.332786  262782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:55:19.333420  262782 out.go:303] Setting JSON to false
	I1031 17:55:19.334393  262782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5830,"bootTime":1698769090,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:55:19.334466  262782 start.go:138] virtualization: kvm guest
	I1031 17:55:19.337153  262782 out.go:177] * [multinode-441410] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:55:19.339948  262782 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:55:19.339904  262782 notify.go:220] Checking for updates...
	I1031 17:55:19.341981  262782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:55:19.343793  262782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:55:19.345511  262782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.347196  262782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:55:19.349125  262782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:55:19.350965  262782 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:55:19.390383  262782 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 17:55:19.392238  262782 start.go:298] selected driver: kvm2
	I1031 17:55:19.392262  262782 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:55:19.392278  262782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:55:19.393486  262782 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.393588  262782 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:55:19.409542  262782 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:55:19.409621  262782 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:55:19.409956  262782 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 17:55:19.410064  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:19.410086  262782 cni.go:136] 0 nodes found, recommending kindnet
	I1031 17:55:19.410099  262782 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1031 17:55:19.410115  262782 start_flags.go:323] config:
	{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:19.410333  262782 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:55:19.412532  262782 out.go:177] * Starting control plane node multinode-441410 in cluster multinode-441410
	I1031 17:55:19.414074  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:19.414126  262782 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:55:19.414140  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:55:19.414258  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:55:19.414274  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:55:19.414805  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:19.414841  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json: {Name:mkd54197469926d51fdbbde17b5339be20c167e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:19.415042  262782 start.go:365] acquiring machines lock for multinode-441410: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:55:19.415097  262782 start.go:369] acquired machines lock for "multinode-441410" in 32.484µs
	I1031 17:55:19.415125  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:55:19.415216  262782 start.go:125] createHost starting for "" (driver="kvm2")
	I1031 17:55:19.417219  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:55:19.417415  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:55:19.417489  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:55:19.432168  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1031 17:55:19.432674  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:55:19.433272  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:55:19.433296  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:55:19.433625  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:55:19.433867  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:19.434062  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:19.434218  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:55:19.434267  262782 client.go:168] LocalClient.Create starting
	I1031 17:55:19.434308  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:55:19.434359  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434390  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434470  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:55:19.434513  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:55:19.434537  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:55:19.434562  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:55:19.434590  262782 main.go:141] libmachine: (multinode-441410) Calling .PreCreateCheck
	I1031 17:55:19.435073  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:19.435488  262782 main.go:141] libmachine: Creating machine...
	I1031 17:55:19.435505  262782 main.go:141] libmachine: (multinode-441410) Calling .Create
	I1031 17:55:19.435668  262782 main.go:141] libmachine: (multinode-441410) Creating KVM machine...
	I1031 17:55:19.437062  262782 main.go:141] libmachine: (multinode-441410) DBG | found existing default KVM network
	I1031 17:55:19.438028  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.437857  262805 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b80}
	I1031 17:55:19.443902  262782 main.go:141] libmachine: (multinode-441410) DBG | trying to create private KVM network mk-multinode-441410 192.168.39.0/24...
	I1031 17:55:19.525645  262782 main.go:141] libmachine: (multinode-441410) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.525688  262782 main.go:141] libmachine: (multinode-441410) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:55:19.525703  262782 main.go:141] libmachine: (multinode-441410) DBG | private KVM network mk-multinode-441410 192.168.39.0/24 created
	I1031 17:55:19.525722  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.525539  262805 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.525748  262782 main.go:141] libmachine: (multinode-441410) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:55:19.765064  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.764832  262805 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa...
	I1031 17:55:19.911318  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911121  262805 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk...
	I1031 17:55:19.911356  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing magic tar header
	I1031 17:55:19.911370  262782 main.go:141] libmachine: (multinode-441410) DBG | Writing SSH key tar header
	I1031 17:55:19.911381  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:19.911287  262805 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 ...
	I1031 17:55:19.911394  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410
	I1031 17:55:19.911471  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410 (perms=drwx------)
	I1031 17:55:19.911505  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:55:19.911519  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:55:19.911546  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:55:19.911561  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:55:19.911575  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:55:19.911592  262782 main.go:141] libmachine: (multinode-441410) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:55:19.911605  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:55:19.911638  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:55:19.911655  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:55:19.911666  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:55:19.911678  262782 main.go:141] libmachine: (multinode-441410) DBG | Checking permissions on dir: /home
	I1031 17:55:19.911690  262782 main.go:141] libmachine: (multinode-441410) DBG | Skipping /home - not owner
	I1031 17:55:19.911786  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:19.912860  262782 main.go:141] libmachine: (multinode-441410) define libvirt domain using xml: 
	I1031 17:55:19.912876  262782 main.go:141] libmachine: (multinode-441410) <domain type='kvm'>
	I1031 17:55:19.912885  262782 main.go:141] libmachine: (multinode-441410)   <name>multinode-441410</name>
	I1031 17:55:19.912891  262782 main.go:141] libmachine: (multinode-441410)   <memory unit='MiB'>2200</memory>
	I1031 17:55:19.912899  262782 main.go:141] libmachine: (multinode-441410)   <vcpu>2</vcpu>
	I1031 17:55:19.912908  262782 main.go:141] libmachine: (multinode-441410)   <features>
	I1031 17:55:19.912918  262782 main.go:141] libmachine: (multinode-441410)     <acpi/>
	I1031 17:55:19.912932  262782 main.go:141] libmachine: (multinode-441410)     <apic/>
	I1031 17:55:19.912942  262782 main.go:141] libmachine: (multinode-441410)     <pae/>
	I1031 17:55:19.912956  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.912965  262782 main.go:141] libmachine: (multinode-441410)   </features>
	I1031 17:55:19.912975  262782 main.go:141] libmachine: (multinode-441410)   <cpu mode='host-passthrough'>
	I1031 17:55:19.912981  262782 main.go:141] libmachine: (multinode-441410)   
	I1031 17:55:19.912990  262782 main.go:141] libmachine: (multinode-441410)   </cpu>
	I1031 17:55:19.913049  262782 main.go:141] libmachine: (multinode-441410)   <os>
	I1031 17:55:19.913085  262782 main.go:141] libmachine: (multinode-441410)     <type>hvm</type>
	I1031 17:55:19.913098  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='cdrom'/>
	I1031 17:55:19.913111  262782 main.go:141] libmachine: (multinode-441410)     <boot dev='hd'/>
	I1031 17:55:19.913123  262782 main.go:141] libmachine: (multinode-441410)     <bootmenu enable='no'/>
	I1031 17:55:19.913135  262782 main.go:141] libmachine: (multinode-441410)   </os>
	I1031 17:55:19.913142  262782 main.go:141] libmachine: (multinode-441410)   <devices>
	I1031 17:55:19.913154  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='cdrom'>
	I1031 17:55:19.913188  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/boot2docker.iso'/>
	I1031 17:55:19.913211  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hdc' bus='scsi'/>
	I1031 17:55:19.913222  262782 main.go:141] libmachine: (multinode-441410)       <readonly/>
	I1031 17:55:19.913230  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913237  262782 main.go:141] libmachine: (multinode-441410)     <disk type='file' device='disk'>
	I1031 17:55:19.913247  262782 main.go:141] libmachine: (multinode-441410)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:55:19.913257  262782 main.go:141] libmachine: (multinode-441410)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/multinode-441410.rawdisk'/>
	I1031 17:55:19.913265  262782 main.go:141] libmachine: (multinode-441410)       <target dev='hda' bus='virtio'/>
	I1031 17:55:19.913271  262782 main.go:141] libmachine: (multinode-441410)     </disk>
	I1031 17:55:19.913279  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913304  262782 main.go:141] libmachine: (multinode-441410)       <source network='mk-multinode-441410'/>
	I1031 17:55:19.913323  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913334  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913340  262782 main.go:141] libmachine: (multinode-441410)     <interface type='network'>
	I1031 17:55:19.913350  262782 main.go:141] libmachine: (multinode-441410)       <source network='default'/>
	I1031 17:55:19.913358  262782 main.go:141] libmachine: (multinode-441410)       <model type='virtio'/>
	I1031 17:55:19.913367  262782 main.go:141] libmachine: (multinode-441410)     </interface>
	I1031 17:55:19.913373  262782 main.go:141] libmachine: (multinode-441410)     <serial type='pty'>
	I1031 17:55:19.913380  262782 main.go:141] libmachine: (multinode-441410)       <target port='0'/>
	I1031 17:55:19.913392  262782 main.go:141] libmachine: (multinode-441410)     </serial>
	I1031 17:55:19.913400  262782 main.go:141] libmachine: (multinode-441410)     <console type='pty'>
	I1031 17:55:19.913406  262782 main.go:141] libmachine: (multinode-441410)       <target type='serial' port='0'/>
	I1031 17:55:19.913415  262782 main.go:141] libmachine: (multinode-441410)     </console>
	I1031 17:55:19.913420  262782 main.go:141] libmachine: (multinode-441410)     <rng model='virtio'>
	I1031 17:55:19.913430  262782 main.go:141] libmachine: (multinode-441410)       <backend model='random'>/dev/random</backend>
	I1031 17:55:19.913438  262782 main.go:141] libmachine: (multinode-441410)     </rng>
	I1031 17:55:19.913444  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913451  262782 main.go:141] libmachine: (multinode-441410)     
	I1031 17:55:19.913466  262782 main.go:141] libmachine: (multinode-441410)   </devices>
	I1031 17:55:19.913478  262782 main.go:141] libmachine: (multinode-441410) </domain>
	I1031 17:55:19.913494  262782 main.go:141] libmachine: (multinode-441410) 
	I1031 17:55:19.918938  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:a8:1a:6f in network default
	I1031 17:55:19.919746  262782 main.go:141] libmachine: (multinode-441410) Ensuring networks are active...
	I1031 17:55:19.919779  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:19.920667  262782 main.go:141] libmachine: (multinode-441410) Ensuring network default is active
	I1031 17:55:19.921191  262782 main.go:141] libmachine: (multinode-441410) Ensuring network mk-multinode-441410 is active
	I1031 17:55:19.921920  262782 main.go:141] libmachine: (multinode-441410) Getting domain xml...
	I1031 17:55:19.922729  262782 main.go:141] libmachine: (multinode-441410) Creating domain...
	I1031 17:55:21.188251  262782 main.go:141] libmachine: (multinode-441410) Waiting to get IP...
	I1031 17:55:21.189112  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.189553  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.189651  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.189544  262805 retry.go:31] will retry after 253.551134ms: waiting for machine to come up
	I1031 17:55:21.445380  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.446013  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.446068  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.445963  262805 retry.go:31] will retry after 339.196189ms: waiting for machine to come up
	I1031 17:55:21.787255  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:21.787745  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:21.787820  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:21.787720  262805 retry.go:31] will retry after 327.624827ms: waiting for machine to come up
	I1031 17:55:22.116624  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.117119  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.117172  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.117092  262805 retry.go:31] will retry after 590.569743ms: waiting for machine to come up
	I1031 17:55:22.708956  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:22.709522  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:22.709557  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:22.709457  262805 retry.go:31] will retry after 529.327938ms: waiting for machine to come up
	I1031 17:55:23.240569  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:23.241037  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:23.241072  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:23.240959  262805 retry.go:31] will retry after 851.275698ms: waiting for machine to come up
	I1031 17:55:24.094299  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:24.094896  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:24.094920  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:24.094823  262805 retry.go:31] will retry after 1.15093211s: waiting for machine to come up
	I1031 17:55:25.247106  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:25.247599  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:25.247626  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:25.247539  262805 retry.go:31] will retry after 1.373860049s: waiting for machine to come up
	I1031 17:55:26.623256  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:26.623664  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:26.623692  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:26.623636  262805 retry.go:31] will retry after 1.485039137s: waiting for machine to come up
	I1031 17:55:28.111660  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:28.112328  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:28.112354  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:28.112293  262805 retry.go:31] will retry after 1.60937397s: waiting for machine to come up
	I1031 17:55:29.723598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:29.724147  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:29.724177  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:29.724082  262805 retry.go:31] will retry after 2.42507473s: waiting for machine to come up
	I1031 17:55:32.152858  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:32.153485  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:32.153513  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:32.153423  262805 retry.go:31] will retry after 3.377195305s: waiting for machine to come up
	I1031 17:55:35.532565  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:35.533082  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find current IP address of domain multinode-441410 in network mk-multinode-441410
	I1031 17:55:35.533102  262782 main.go:141] libmachine: (multinode-441410) DBG | I1031 17:55:35.533032  262805 retry.go:31] will retry after 4.45355341s: waiting for machine to come up
	I1031 17:55:39.988754  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989190  262782 main.go:141] libmachine: (multinode-441410) Found IP for machine: 192.168.39.206
	I1031 17:55:39.989225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has current primary IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:39.989243  262782 main.go:141] libmachine: (multinode-441410) Reserving static IP address...
	I1031 17:55:39.989595  262782 main.go:141] libmachine: (multinode-441410) DBG | unable to find host DHCP lease matching {name: "multinode-441410", mac: "52:54:00:74:db:aa", ip: "192.168.39.206"} in network mk-multinode-441410
	I1031 17:55:40.070348  262782 main.go:141] libmachine: (multinode-441410) DBG | Getting to WaitForSSH function...
	I1031 17:55:40.070381  262782 main.go:141] libmachine: (multinode-441410) Reserved static IP address: 192.168.39.206
	I1031 17:55:40.070396  262782 main.go:141] libmachine: (multinode-441410) Waiting for SSH to be available...
	I1031 17:55:40.073157  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073624  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.073659  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.073794  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH client type: external
	I1031 17:55:40.073821  262782 main.go:141] libmachine: (multinode-441410) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa (-rw-------)
	I1031 17:55:40.073857  262782 main.go:141] libmachine: (multinode-441410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:55:40.073874  262782 main.go:141] libmachine: (multinode-441410) DBG | About to run SSH command:
	I1031 17:55:40.073891  262782 main.go:141] libmachine: (multinode-441410) DBG | exit 0
	I1031 17:55:40.165968  262782 main.go:141] libmachine: (multinode-441410) DBG | SSH cmd err, output: <nil>: 
	I1031 17:55:40.166287  262782 main.go:141] libmachine: (multinode-441410) KVM machine creation complete!
	I1031 17:55:40.166650  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:40.167202  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167424  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.167685  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:55:40.167701  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:55:40.169353  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:55:40.169374  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:55:40.169385  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:55:40.169398  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.172135  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172606  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.172637  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.172779  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.173053  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173213  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.173363  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.173538  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.174029  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.174071  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:55:40.289219  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.289243  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:55:40.289252  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.292457  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.292941  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.292982  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.293211  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.293421  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293574  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.293716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.293877  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.294216  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.294230  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:55:40.414670  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:55:40.414814  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:55:40.414839  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:55:40.414853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415137  262782 buildroot.go:166] provisioning hostname "multinode-441410"
	I1031 17:55:40.415162  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.415361  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.417958  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418259  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.418289  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.418408  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.418600  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418756  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.418924  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.419130  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.419464  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.419483  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410 && echo "multinode-441410" | sudo tee /etc/hostname
	I1031 17:55:40.546610  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410
	
	I1031 17:55:40.546645  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.549510  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.549861  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.549899  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.550028  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.550263  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550434  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.550567  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.550727  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.551064  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.551088  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:55:40.677922  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:55:40.677950  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:55:40.678007  262782 buildroot.go:174] setting up certificates
	I1031 17:55:40.678021  262782 provision.go:83] configureAuth start
	I1031 17:55:40.678054  262782 main.go:141] libmachine: (multinode-441410) Calling .GetMachineName
	I1031 17:55:40.678362  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:40.681066  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681425  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.681463  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.681592  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.684040  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684364  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.684398  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.684529  262782 provision.go:138] copyHostCerts
	I1031 17:55:40.684585  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684621  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:55:40.684638  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:55:40.684693  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:55:40.684774  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684791  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:55:40.684798  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:55:40.684834  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:55:40.684879  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684897  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:55:40.684904  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:55:40.684923  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:55:40.684968  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410 san=[192.168.39.206 192.168.39.206 localhost 127.0.0.1 minikube multinode-441410]
	I1031 17:55:40.801336  262782 provision.go:172] copyRemoteCerts
	I1031 17:55:40.801411  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:55:40.801439  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.804589  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805040  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.805075  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.805300  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.805513  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.805703  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.805957  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:40.895697  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:55:40.895816  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 17:55:40.918974  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:55:40.919053  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:55:40.941084  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:55:40.941158  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1031 17:55:40.963360  262782 provision.go:86] duration metric: configureAuth took 285.323582ms
	I1031 17:55:40.963391  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:55:40.963590  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:55:40.963617  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:40.963943  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:40.967158  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967533  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:40.967567  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:40.967748  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:40.967975  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:40.968250  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:40.968438  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:40.968756  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:40.968769  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:55:41.087693  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:55:41.087731  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:55:41.087886  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:55:41.087930  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.091022  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091330  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.091362  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.091636  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.091849  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092005  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.092130  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.092396  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.092748  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.092819  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:55:41.222685  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
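The unit rendered above leans on a systemd rule: a bare `ExecStart=` clears any command inherited from a base unit, which is why the drop-in lists the directive twice (once empty, once with the real `dockerd` invocation) and why the comment block warns about the "more than one ExecStart=" error. A minimal sketch (temp file, hypothetical dockerd flags) that just confirms both directives survive in the rendered unit:

```shell
# Write a trimmed copy of the drop-in pattern to a temp file, then count
# ExecStart directives: the first (empty) clears any inherited command,
# the second supplies the actual dockerd invocation.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
count=$(grep -c '^ExecStart' "$unit")
echo "$count"
```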
	
	I1031 17:55:41.222793  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:41.225314  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225688  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:41.225721  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:41.225991  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:41.226196  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226358  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:41.226571  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:41.226715  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:41.227028  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:41.227046  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:55:42.044149  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
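The `diff … || { mv …; systemctl restart …; }` command above is minikube's update-if-changed idiom: install the new unit and restart docker only when it differs from what is already on disk (here the `diff` fails simply because no docker.service exists yet on the fresh VM, so the move and restart always run). A sketch of the same pattern with plain files instead of systemd units:

```shell
# Update-if-changed: the mv only happens when diff reports a difference
# (or, as in the log, when the target does not exist at all).
printf 'old\n' > svc.conf
printf 'new\n' > svc.conf.new
diff -u svc.conf svc.conf.new >/dev/null || mv svc.conf.new svc.conf
result=$(cat svc.conf)
echo "$result"
```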
	
	I1031 17:55:42.044190  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:55:42.044205  262782 main.go:141] libmachine: (multinode-441410) Calling .GetURL
	I1031 17:55:42.045604  262782 main.go:141] libmachine: (multinode-441410) DBG | Using libvirt version 6000000
	I1031 17:55:42.047874  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048274  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.048311  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.048465  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:55:42.048481  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:55:42.048488  262782 client.go:171] LocalClient.Create took 22.614208034s
	I1031 17:55:42.048515  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 22.614298533s
	I1031 17:55:42.048529  262782 start.go:300] post-start starting for "multinode-441410" (driver="kvm2")
	I1031 17:55:42.048545  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:55:42.048568  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.048825  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:55:42.048850  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.051154  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051490  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.051522  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.051670  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.051896  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.052060  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.052222  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.139365  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:55:42.143386  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:55:42.143416  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:55:42.143423  262782 command_runner.go:130] > ID=buildroot
	I1031 17:55:42.143431  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:55:42.143439  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:55:42.143517  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:55:42.143544  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:55:42.143626  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:55:42.143717  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:55:42.143739  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:55:42.143844  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:55:42.152251  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:42.175053  262782 start.go:303] post-start completed in 126.502146ms
	I1031 17:55:42.175115  262782 main.go:141] libmachine: (multinode-441410) Calling .GetConfigRaw
	I1031 17:55:42.175759  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.178273  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178674  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.178710  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.178967  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:55:42.179162  262782 start.go:128] duration metric: createHost completed in 22.763933262s
	I1031 17:55:42.179188  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.181577  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.181893  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.181922  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.182088  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.182276  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182423  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.182585  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.182780  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:55:42.183103  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1031 17:55:42.183115  262782 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 17:55:42.302764  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698774942.272150082
	
	I1031 17:55:42.302792  262782 fix.go:206] guest clock: 1698774942.272150082
	I1031 17:55:42.302806  262782 fix.go:219] Guest: 2023-10-31 17:55:42.272150082 +0000 UTC Remote: 2023-10-31 17:55:42.179175821 +0000 UTC m=+22.901038970 (delta=92.974261ms)
	I1031 17:55:42.302833  262782 fix.go:190] guest clock delta is within tolerance: 92.974261ms
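The tolerance check above compares the guest's `date +%s.%N` reading against the host-side timestamp and accepts the skew when the delta is small. A sketch reproducing the arithmetic with the two timestamps from the log lines above (hardcoded here; double-precision rounding on second-scale epochs means only the leading digits of the delta are reliable, so the result is printed to 3 decimal places):

```shell
# Guest and host timestamps copied from the fix.go lines above; the
# difference should land near the logged delta of ~92.974ms.
guest=1698774942.272150082
host=1698774942.179175821
delta_ms=$(awk -v g="$guest" -v h="$host" 'BEGIN { printf "%.3f", (g-h)*1000 }')
echo "${delta_ms} ms"
```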
	I1031 17:55:42.302839  262782 start.go:83] releasing machines lock for "multinode-441410", held for 22.887729904s
	I1031 17:55:42.302867  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.303166  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:42.306076  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306458  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.306488  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.306676  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307206  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307399  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:55:42.307489  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:55:42.307531  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.307594  262782 ssh_runner.go:195] Run: cat /version.json
	I1031 17:55:42.307623  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:55:42.310225  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310502  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310538  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310598  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.310696  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.310863  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.310959  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:42.310992  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:42.311042  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311126  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:55:42.311202  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.311382  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:55:42.311546  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:55:42.311673  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:55:42.394439  262782 command_runner.go:130] > {"iso_version": "v1.32.0", "kicbase_version": "v0.0.40-1698167243-17466", "minikube_version": "v1.32.0-beta.0", "commit": "826a5f4ecfc9c21a72522a8343b4079f2e26b26e"}
	I1031 17:55:42.394908  262782 ssh_runner.go:195] Run: systemctl --version
	I1031 17:55:42.452613  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1031 17:55:42.453327  262782 command_runner.go:130] > systemd 247 (247)
	I1031 17:55:42.453352  262782 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1031 17:55:42.453425  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:55:42.458884  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1031 17:55:42.458998  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:55:42.459070  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:55:42.473287  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:55:42.473357  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:55:42.473370  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.473502  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.493268  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
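The `printf … | sudo tee /etc/crictl.yaml` above just materializes a one-line crictl config pointing at the containerd socket; the line echoed back by `tee` on the following log line confirms the content. A root-free sketch writing the same YAML to the working directory instead of `/etc`:

```shell
# Same one-line YAML the provisioner writes to /etc/crictl.yaml,
# written locally so no sudo is needed for the sketch.
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' > crictl.yaml
content=$(cat crictl.yaml)
echo "$content"
```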
	I1031 17:55:42.493374  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:55:42.503251  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:55:42.513088  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:55:42.513164  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:55:42.522949  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.532741  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:55:42.542451  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:55:42.552637  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:55:42.562528  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
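The run of `sed` edits above rewrites containerd's config in place over SSH; the key one flips `SystemdCgroup` to `false` so containerd uses the cgroupfs driver the log announced. A sketch of that substitution applied to a sample fragment (a local file, not the real /etc/containerd/config.toml; assumes GNU sed, as on the Buildroot guest):

```shell
# Apply the same indentation-preserving substitution from the log to a
# sample containerd config fragment, then confirm the flag was flipped.
cat > sample-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' sample-config.toml
flipped=$(grep -c 'SystemdCgroup = false' sample-config.toml)
echo "$flipped"
```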
	I1031 17:55:42.572212  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:55:42.580618  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:55:42.580701  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:55:42.589366  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:42.695731  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:55:42.713785  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:55:42.713889  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:55:42.726262  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:55:42.727076  262782 command_runner.go:130] > [Unit]
	I1031 17:55:42.727098  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:55:42.727108  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:55:42.727118  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:55:42.727127  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:55:42.727133  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:55:42.727138  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:55:42.727141  262782 command_runner.go:130] > [Service]
	I1031 17:55:42.727146  262782 command_runner.go:130] > Type=notify
	I1031 17:55:42.727153  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:55:42.727160  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:55:42.727174  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:55:42.727189  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:55:42.727204  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:55:42.727217  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:55:42.727224  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:55:42.727232  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:55:42.727243  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:55:42.727253  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:55:42.727259  262782 command_runner.go:130] > ExecStart=
	I1031 17:55:42.727289  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:55:42.727304  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:55:42.727315  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:55:42.727329  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:55:42.727340  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:55:42.727351  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:55:42.727361  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:55:42.727375  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:55:42.727387  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:55:42.727394  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:55:42.727404  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:55:42.727415  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:55:42.727426  262782 command_runner.go:130] > Delegate=yes
	I1031 17:55:42.727446  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:55:42.727456  262782 command_runner.go:130] > KillMode=process
	I1031 17:55:42.727462  262782 command_runner.go:130] > [Install]
	I1031 17:55:42.727478  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:55:42.727556  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.742533  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:55:42.763661  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:55:42.776184  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.788281  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:55:42.819463  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:55:42.831989  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:55:42.848534  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:55:42.848778  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:55:42.852296  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:55:42.852426  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:55:42.861006  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:55:42.876798  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:55:42.982912  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:55:43.083895  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:55:43.084055  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:55:43.100594  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:43.199621  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:44.590395  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390727747s)
	I1031 17:55:44.590461  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.709964  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:55:44.823771  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:55:44.930613  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.044006  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:55:45.059765  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:45.173339  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:55:45.248477  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:55:45.248549  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:55:45.254167  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:55:45.254191  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:55:45.254197  262782 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I1031 17:55:45.254204  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:55:45.254212  262782 command_runner.go:130] > Access: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254217  262782 command_runner.go:130] > Modify: 2023-10-31 17:55:45.158308568 +0000
	I1031 17:55:45.254222  262782 command_runner.go:130] > Change: 2023-10-31 17:55:45.161313088 +0000
	I1031 17:55:45.254227  262782 command_runner.go:130] >  Birth: -
	I1031 17:55:45.254493  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:55:45.254544  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:55:45.258520  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:55:45.258923  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:55:45.307623  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:55:45.307647  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:55:45.307659  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:55:45.307664  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:55:45.309086  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:55:45.309154  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.336941  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.337102  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:55:45.363904  262782 command_runner.go:130] > 24.0.6
	I1031 17:55:45.366711  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:55:45.366768  262782 main.go:141] libmachine: (multinode-441410) Calling .GetIP
	I1031 17:55:45.369326  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369676  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:55:45.369709  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:55:45.369870  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:55:45.373925  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:45.386904  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:55:45.386972  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:45.404415  262782 docker.go:699] Got preloaded images: 
	I1031 17:55:45.404452  262782 docker.go:705] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1031 17:55:45.404507  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:45.412676  262782 command_runner.go:139] > {"Repositories":{}}
	I1031 17:55:45.412812  262782 ssh_runner.go:195] Run: which lz4
	I1031 17:55:45.416227  262782 command_runner.go:130] > /usr/bin/lz4
	I1031 17:55:45.416400  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1031 17:55:45.416500  262782 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1031 17:55:45.420081  262782 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420121  262782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 17:55:45.420138  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
	I1031 17:55:46.913961  262782 docker.go:663] Took 1.497490 seconds to copy over tarball
	I1031 17:55:46.914071  262782 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 17:55:49.329206  262782 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415093033s)
	I1031 17:55:49.329241  262782 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 17:55:49.366441  262782 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1031 17:55:49.376335  262782 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.3":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.3":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.3":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.3":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1031 17:55:49.376538  262782 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1031 17:55:49.391874  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:55:49.500414  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:55:53.692136  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.191674862s)
	I1031 17:55:53.692233  262782 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 17:55:53.711627  262782 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1031 17:55:53.711652  262782 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1031 17:55:53.711659  262782 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1031 17:55:53.711668  262782 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1031 17:55:53.711676  262782 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1031 17:55:53.711683  262782 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1031 17:55:53.711697  262782 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1031 17:55:53.711706  262782 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:55:53.711782  262782 docker.go:699] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 17:55:53.711806  262782 cache_images.go:84] Images are preloaded, skipping loading
	I1031 17:55:53.711883  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:55:53.740421  262782 command_runner.go:130] > cgroupfs
	I1031 17:55:53.740792  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:55:53.740825  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:55:53.740859  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:55:53.740895  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:55:53.741084  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 17:55:53.741177  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:55:53.741255  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:55:53.750285  262782 command_runner.go:130] > kubeadm
	I1031 17:55:53.750313  262782 command_runner.go:130] > kubectl
	I1031 17:55:53.750320  262782 command_runner.go:130] > kubelet
	I1031 17:55:53.750346  262782 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 17:55:53.750419  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 17:55:53.759486  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1031 17:55:53.774226  262782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 17:55:53.788939  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1031 17:55:53.803942  262782 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1031 17:55:53.807376  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:55:53.818173  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.206
	I1031 17:55:53.818219  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:53.818480  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:55:53.818537  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:55:53.818583  262782 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key
	I1031 17:55:53.818597  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt with IP's: []
	I1031 17:55:54.061185  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt ...
	I1031 17:55:54.061218  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt: {Name:mk284a8b72ddb8501d1ac0de2efd8648580727ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061410  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key ...
	I1031 17:55:54.061421  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key: {Name:mkb1aa147b5241c87f7abf5da271aec87929577f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.061497  262782 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c
	I1031 17:55:54.061511  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c with IP's: [192.168.39.206 10.96.0.1 127.0.0.1 10.0.0.1]
	I1031 17:55:54.182000  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c ...
	I1031 17:55:54.182045  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c: {Name:mka38bf70770f4cf0ce783993768b6eb76ec9999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182223  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c ...
	I1031 17:55:54.182236  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c: {Name:mk5372c72c876c14b22a095e3af7651c8be7b17b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.182310  262782 certs.go:337] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt
	I1031 17:55:54.182380  262782 certs.go:341] copying /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key.b548e89c -> /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key
	I1031 17:55:54.182432  262782 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key
	I1031 17:55:54.182446  262782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt with IP's: []
	I1031 17:55:54.414562  262782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt ...
	I1031 17:55:54.414599  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt: {Name:mk84bf718660ce0c658a2fcf223743aa789d6fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414767  262782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key ...
	I1031 17:55:54.414778  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key: {Name:mk01f7180484a1490c7dd39d1cd242d6c20cb972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:55:54.414916  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1031 17:55:54.414935  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1031 17:55:54.414945  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1031 17:55:54.414957  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1031 17:55:54.414969  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:55:54.414982  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:55:54.414994  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:55:54.415007  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:55:54.415053  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:55:54.415086  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:55:54.415097  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:55:54.415119  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:55:54.415143  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:55:54.415164  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:55:54.415205  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:55:54.415240  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.415253  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.415265  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.415782  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 17:55:54.437836  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 17:55:54.458014  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 17:55:54.478381  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 17:55:54.502178  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:55:54.524456  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:55:54.545501  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:55:54.566026  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:55:54.586833  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:55:54.606979  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:55:54.627679  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:55:54.648719  262782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 17:55:54.663657  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:55:54.668342  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:55:54.668639  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:55:54.678710  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683132  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683170  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.683216  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:55:54.688787  262782 command_runner.go:130] > b5213941
	I1031 17:55:54.688851  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:55:54.698497  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:55:54.708228  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712358  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712425  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.712486  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:55:54.717851  262782 command_runner.go:130] > 51391683
	I1031 17:55:54.718054  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:55:54.728090  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:55:54.737860  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.741983  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742014  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.742077  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:55:54.747329  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:55:54.747568  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
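The cert lines above show minikube installing each CA into the guest's trust store: it runs `openssl x509 -hash -noout` to get the certificate's subject hash, then symlinks the PEM as `<hash>.0` under `/etc/ssl/certs`, which is the lookup layout OpenSSL expects. A minimal sketch of that link-by-hash step, using a temp directory and a hard-coded stand-in hash rather than calling `openssl` (paths and the hash value are illustrative, not taken from the log):

```shell
#!/usr/bin/env bash
# Link a CA file into a trust directory under its OpenSSL subject hash,
# mirroring the `test -L ... || ln -fs ...` commands in the log above.
# The hash is hard-coded here; the real flow computes it with:
#   openssl x509 -hash -noout -in "$pem"
set -euo pipefail

certs_dir=$(mktemp -d)            # stand-in for /etc/ssl/certs
pem="$certs_dir/minikubeCA.pem"
printf 'fake-cert\n' > "$pem"

hash=b5213941                     # illustrative subject hash

# Create the <hash>.0 symlink only if it does not already exist as a link.
test -L "$certs_dir/$hash.0" || ln -fs "$pem" "$certs_dir/$hash.0"

readlink "$certs_dir/$hash.0"
```

The `test -L || ln -fs` guard makes the step idempotent, which matters because minikube re-runs this sequence on every start.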
	I1031 17:55:54.757960  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:55:54.762106  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762156  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:55:54.762200  262782 kubeadm.go:404] StartCluster: {Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:55:54.762325  262782 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 17:55:54.779382  262782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 17:55:54.788545  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1031 17:55:54.788569  262782 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1031 17:55:54.788576  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1031 17:55:54.788668  262782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 17:55:54.797682  262782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 17:55:54.806403  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1031 17:55:54.806436  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1031 17:55:54.806450  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1031 17:55:54.806468  262782 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806517  262782 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1031 17:55:54.806564  262782 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1031 17:55:55.188341  262782 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:55:55.188403  262782 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1031 17:56:06.674737  262782 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674768  262782 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1031 17:56:06.674822  262782 kubeadm.go:322] [preflight] Running pre-flight checks
	I1031 17:56:06.674829  262782 command_runner.go:130] > [preflight] Running pre-flight checks
	I1031 17:56:06.674920  262782 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.674932  262782 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1031 17:56:06.675048  262782 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675061  262782 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1031 17:56:06.675182  262782 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675192  262782 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1031 17:56:06.675269  262782 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677413  262782 out.go:204]   - Generating certificates and keys ...
	I1031 17:56:06.675365  262782 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1031 17:56:06.677514  262782 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1031 17:56:06.677528  262782 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1031 17:56:06.677634  262782 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677656  262782 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1031 17:56:06.677744  262782 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677758  262782 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1031 17:56:06.677823  262782 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677833  262782 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1031 17:56:06.677936  262782 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.677954  262782 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1031 17:56:06.678021  262782 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678049  262782 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1031 17:56:06.678127  262782 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678137  262782 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1031 17:56:06.678292  262782 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678305  262782 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678400  262782 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678411  262782 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1031 17:56:06.678595  262782 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678609  262782 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-441410] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1031 17:56:06.678701  262782 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678712  262782 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1031 17:56:06.678793  262782 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678802  262782 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1031 17:56:06.678860  262782 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1031 17:56:06.678871  262782 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1031 17:56:06.678936  262782 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678942  262782 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1031 17:56:06.678984  262782 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.678992  262782 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1031 17:56:06.679084  262782 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679102  262782 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1031 17:56:06.679185  262782 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679195  262782 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1031 17:56:06.679260  262782 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679268  262782 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1031 17:56:06.679342  262782 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679349  262782 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1031 17:56:06.679417  262782 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.679431  262782 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1031 17:56:06.681286  262782 out.go:204]   - Booting up control plane ...
	I1031 17:56:06.681398  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681410  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1031 17:56:06.681506  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681516  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1031 17:56:06.681594  262782 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681603  262782 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1031 17:56:06.681746  262782 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681756  262782 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1031 17:56:06.681864  262782 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681882  262782 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1031 17:56:06.681937  262782 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1031 17:56:06.681947  262782 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1031 17:56:06.682147  262782 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682162  262782 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1031 17:56:06.682272  262782 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682284  262782 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003361 seconds
	I1031 17:56:06.682392  262782 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682408  262782 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1031 17:56:06.682506  262782 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682513  262782 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1031 17:56:06.682558  262782 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682564  262782 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1031 17:56:06.682748  262782 command_runner.go:130] > [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682756  262782 kubeadm.go:322] [mark-control-plane] Marking the node multinode-441410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1031 17:56:06.682804  262782 command_runner.go:130] > [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.682810  262782 kubeadm.go:322] [bootstrap-token] Using token: 4ew4ey.86ff3t11s91jtycv
	I1031 17:56:06.685457  262782 out.go:204]   - Configuring RBAC rules ...
	I1031 17:56:06.685573  262782 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685590  262782 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1031 17:56:06.685716  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685726  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1031 17:56:06.685879  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.685890  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1031 17:56:06.686064  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686074  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1031 17:56:06.686185  262782 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686193  262782 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1031 17:56:06.686308  262782 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686318  262782 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1031 17:56:06.686473  262782 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686484  262782 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1031 17:56:06.686541  262782 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686549  262782 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1031 17:56:06.686623  262782 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686642  262782 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1031 17:56:06.686658  262782 kubeadm.go:322] 
	I1031 17:56:06.686740  262782 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686749  262782 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1031 17:56:06.686756  262782 kubeadm.go:322] 
	I1031 17:56:06.686858  262782 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686867  262782 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1031 17:56:06.686873  262782 kubeadm.go:322] 
	I1031 17:56:06.686903  262782 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1031 17:56:06.686915  262782 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1031 17:56:06.687003  262782 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687013  262782 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1031 17:56:06.687080  262782 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687094  262782 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1031 17:56:06.687106  262782 kubeadm.go:322] 
	I1031 17:56:06.687178  262782 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687191  262782 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1031 17:56:06.687205  262782 kubeadm.go:322] 
	I1031 17:56:06.687294  262782 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687309  262782 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1031 17:56:06.687325  262782 kubeadm.go:322] 
	I1031 17:56:06.687395  262782 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1031 17:56:06.687404  262782 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1031 17:56:06.687504  262782 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687514  262782 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1031 17:56:06.687593  262782 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687602  262782 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1031 17:56:06.687609  262782 kubeadm.go:322] 
	I1031 17:56:06.687728  262782 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687745  262782 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1031 17:56:06.687836  262782 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1031 17:56:06.687846  262782 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1031 17:56:06.687855  262782 kubeadm.go:322] 
	I1031 17:56:06.687969  262782 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.687979  262782 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688089  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688100  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 \
	I1031 17:56:06.688133  262782 command_runner.go:130] > 	--control-plane 
	I1031 17:56:06.688142  262782 kubeadm.go:322] 	--control-plane 
	I1031 17:56:06.688150  262782 kubeadm.go:322] 
	I1031 17:56:06.688261  262782 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688270  262782 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1031 17:56:06.688277  262782 kubeadm.go:322] 
	I1031 17:56:06.688376  262782 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688386  262782 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4ew4ey.86ff3t11s91jtycv \
	I1031 17:56:06.688522  262782 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688542  262782 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a40a95b0d54f961e81ee3b396bfaa697fa93fa543c07de94ffb5599a1c53b119 
	I1031 17:56:06.688557  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:56:06.688567  262782 cni.go:136] 1 nodes found, recommending kindnet
	I1031 17:56:06.690284  262782 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1031 17:56:06.691575  262782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1031 17:56:06.699721  262782 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1031 17:56:06.699744  262782 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1031 17:56:06.699751  262782 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1031 17:56:06.699758  262782 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1031 17:56:06.699771  262782 command_runner.go:130] > Access: 2023-10-31 17:55:32.181252458 +0000
	I1031 17:56:06.699777  262782 command_runner.go:130] > Modify: 2023-10-27 02:09:29.000000000 +0000
	I1031 17:56:06.699781  262782 command_runner.go:130] > Change: 2023-10-31 17:55:30.407252458 +0000
	I1031 17:56:06.699785  262782 command_runner.go:130] >  Birth: -
	I1031 17:56:06.700087  262782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1031 17:56:06.700110  262782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1031 17:56:06.736061  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1031 17:56:07.869761  262782 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.877013  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1031 17:56:07.885373  262782 command_runner.go:130] > serviceaccount/kindnet created
	I1031 17:56:07.912225  262782 command_runner.go:130] > daemonset.apps/kindnet created
	I1031 17:56:07.915048  262782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.178939625s)
	I1031 17:56:07.915101  262782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 17:56:07.915208  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:07.915216  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45 minikube.k8s.io/name=multinode-441410 minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.156170  262782 command_runner.go:130] > node/multinode-441410 labeled
	I1031 17:56:08.163333  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1031 17:56:08.163430  262782 command_runner.go:130] > -16
	I1031 17:56:08.163456  262782 ops.go:34] apiserver oom_adj: -16
	I1031 17:56:08.163475  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.283799  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.283917  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.377454  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:08.878301  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:08.979804  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.378548  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.478241  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:09.877801  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:09.979764  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.377956  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.471511  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:10.878071  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:10.988718  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.378377  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.476309  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:11.877910  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:11.979867  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.378480  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.487401  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:12.878334  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:12.977526  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.378058  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.464953  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:13.878582  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:13.959833  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.378610  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.472951  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:14.878094  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:14.974738  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.378397  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.544477  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:15.877984  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:15.977685  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.378382  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:16.490687  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:16.878562  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.000414  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.377806  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:17.475937  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:17.878633  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.013599  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.377647  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:18.519307  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:18.877849  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.126007  262782 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1031 17:56:19.378544  262782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1031 17:56:19.572108  262782 command_runner.go:130] > NAME      SECRETS   AGE
	I1031 17:56:19.572137  262782 command_runner.go:130] > default   0         0s
	I1031 17:56:19.575581  262782 kubeadm.go:1081] duration metric: took 11.660457781s to wait for elevateKubeSystemPrivileges.
	I1031 17:56:19.575609  262782 kubeadm.go:406] StartCluster complete in 24.813413549s
	I1031 17:56:19.575630  262782 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.575715  262782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.576350  262782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:56:19.576606  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 17:56:19.576718  262782 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 17:56:19.576824  262782 addons.go:69] Setting storage-provisioner=true in profile "multinode-441410"
	I1031 17:56:19.576852  262782 addons.go:231] Setting addon storage-provisioner=true in "multinode-441410"
	I1031 17:56:19.576860  262782 addons.go:69] Setting default-storageclass=true in profile "multinode-441410"
	I1031 17:56:19.576888  262782 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-441410"
	I1031 17:56:19.576905  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:19.576929  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.576962  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.577200  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.577369  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577406  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577437  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.577479  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.577974  262782 cert_rotation.go:137] Starting client certificate rotation controller
	I1031 17:56:19.578313  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.578334  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.578346  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.578356  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.591250  262782 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1031 17:56:19.591278  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.591289  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.591296  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.591304  262782 round_trippers.go:580]     Audit-Id: 6885baa3-69e3-4348-9d34-ce64b64dd914
	I1031 17:56:19.591312  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.591337  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.591352  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.591360  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.591404  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592007  262782 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"387","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.592083  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.592094  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.592105  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.592115  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:19.592125  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.593071  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1031 17:56:19.593091  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1031 17:56:19.593497  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593620  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.593978  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594006  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594185  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.594205  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.594353  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594579  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.594743  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.594963  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.595009  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.597224  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.597454  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.597727  262782 addons.go:231] Setting addon default-storageclass=true in "multinode-441410"
	I1031 17:56:19.597759  262782 host.go:66] Checking if "multinode-441410" exists ...
	I1031 17:56:19.598123  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.598164  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.611625  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I1031 17:56:19.612151  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.612316  262782 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1031 17:56:19.612332  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.612343  262782 round_trippers.go:580]     Audit-Id: 7721df4e-2d96-45e0-aa5d-34bed664d93e
	I1031 17:56:19.612352  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.612361  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.612375  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.612387  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.612398  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.612410  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.612526  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.612708  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1031 17:56:19.612723  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.612734  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.612742  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.612962  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.612988  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.613391  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1031 17:56:19.613446  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.613716  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.613837  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.614317  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.614340  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.614935  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.615588  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:19.615609  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.615659  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:19.618068  262782 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 17:56:19.619943  262782 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.619961  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 17:56:19.619983  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.621573  262782 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1031 17:56:19.621598  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.621607  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.621616  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.621624  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.621632  262782 round_trippers.go:580]     Content-Length: 291
	I1031 17:56:19.621639  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.621648  262782 round_trippers.go:580]     Audit-Id: f7c98865-24d1-49d1-a253-642f0c1e1843
	I1031 17:56:19.621656  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.621858  262782 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"adccfd68-8818-42ab-a3eb-27552e8e01fd","resourceVersion":"388","creationTimestamp":"2023-10-31T17:56:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1031 17:56:19.622000  262782 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-441410" context rescaled to 1 replicas
	I1031 17:56:19.622076  262782 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 17:56:19.623972  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.623997  262782 out.go:177] * Verifying Kubernetes components...
	I1031 17:56:19.623262  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.625902  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:19.624190  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.625920  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.626004  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.626225  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.626419  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.631723  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I1031 17:56:19.632166  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:19.632589  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:19.632605  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:19.632914  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:19.633144  262782 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 17:56:19.634927  262782 main.go:141] libmachine: (multinode-441410) Calling .DriverName
	I1031 17:56:19.635223  262782 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:19.635243  262782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 17:56:19.635266  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHHostname
	I1031 17:56:19.638266  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638672  262782 main.go:141] libmachine: (multinode-441410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:db:aa", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:55:34 +0000 UTC Type:0 Mac:52:54:00:74:db:aa Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-441410 Clientid:01:52:54:00:74:db:aa}
	I1031 17:56:19.638718  262782 main.go:141] libmachine: (multinode-441410) DBG | domain multinode-441410 has defined IP address 192.168.39.206 and MAC address 52:54:00:74:db:aa in network mk-multinode-441410
	I1031 17:56:19.638853  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHPort
	I1031 17:56:19.639057  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHKeyPath
	I1031 17:56:19.639235  262782 main.go:141] libmachine: (multinode-441410) Calling .GetSSHUsername
	I1031 17:56:19.639375  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410/id_rsa Username:docker}
	I1031 17:56:19.888826  262782 command_runner.go:130] > apiVersion: v1
	I1031 17:56:19.888858  262782 command_runner.go:130] > data:
	I1031 17:56:19.888889  262782 command_runner.go:130] >   Corefile: |
	I1031 17:56:19.888906  262782 command_runner.go:130] >     .:53 {
	I1031 17:56:19.888913  262782 command_runner.go:130] >         errors
	I1031 17:56:19.888920  262782 command_runner.go:130] >         health {
	I1031 17:56:19.888926  262782 command_runner.go:130] >            lameduck 5s
	I1031 17:56:19.888942  262782 command_runner.go:130] >         }
	I1031 17:56:19.888948  262782 command_runner.go:130] >         ready
	I1031 17:56:19.888966  262782 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1031 17:56:19.888973  262782 command_runner.go:130] >            pods insecure
	I1031 17:56:19.888982  262782 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1031 17:56:19.888990  262782 command_runner.go:130] >            ttl 30
	I1031 17:56:19.888996  262782 command_runner.go:130] >         }
	I1031 17:56:19.889003  262782 command_runner.go:130] >         prometheus :9153
	I1031 17:56:19.889011  262782 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1031 17:56:19.889023  262782 command_runner.go:130] >            max_concurrent 1000
	I1031 17:56:19.889032  262782 command_runner.go:130] >         }
	I1031 17:56:19.889039  262782 command_runner.go:130] >         cache 30
	I1031 17:56:19.889047  262782 command_runner.go:130] >         loop
	I1031 17:56:19.889053  262782 command_runner.go:130] >         reload
	I1031 17:56:19.889060  262782 command_runner.go:130] >         loadbalance
	I1031 17:56:19.889066  262782 command_runner.go:130] >     }
	I1031 17:56:19.889076  262782 command_runner.go:130] > kind: ConfigMap
	I1031 17:56:19.889083  262782 command_runner.go:130] > metadata:
	I1031 17:56:19.889099  262782 command_runner.go:130] >   creationTimestamp: "2023-10-31T17:56:06Z"
	I1031 17:56:19.889109  262782 command_runner.go:130] >   name: coredns
	I1031 17:56:19.889116  262782 command_runner.go:130] >   namespace: kube-system
	I1031 17:56:19.889126  262782 command_runner.go:130] >   resourceVersion: "261"
	I1031 17:56:19.889135  262782 command_runner.go:130] >   uid: 0415e493-892c-402f-bd91-be065808b5ec
	I1031 17:56:19.889318  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1031 17:56:19.889578  262782 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:56:19.889833  262782 kapi.go:59] client config for multinode-441410: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.crt", KeyFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/client.key", CAFile:"/home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1c120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 17:56:19.890185  262782 node_ready.go:35] waiting up to 6m0s for node "multinode-441410" to be "Ready" ...
	I1031 17:56:19.890260  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.890269  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.890279  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.890289  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.892659  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.892677  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.892684  262782 round_trippers.go:580]     Audit-Id: b7ed5a1e-e28d-409e-84c2-423a4add0294
	I1031 17:56:19.892689  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.892694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.892699  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.892704  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.892709  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.892987  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.893559  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 17:56:19.893612  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:19.893627  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:19.893635  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:19.893642  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:19.896419  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:19.896449  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:19.896459  262782 round_trippers.go:580]     Audit-Id: dcf80b39-2107-4108-839a-08187b3e7c44
	I1031 17:56:19.896468  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:19.896477  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:19.896486  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:19.896495  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:19.896507  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:19 GMT
	I1031 17:56:19.896635  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:19.948484  262782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 17:56:20.398217  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.398242  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.398257  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.398263  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.401121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.401248  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.401287  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.401299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.401309  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.401318  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.401329  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.401335  262782 round_trippers.go:580]     Audit-Id: b8dfca08-b5c7-4eaa-9102-8e055762149f
	I1031 17:56:20.401479  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:20.788720  262782 command_runner.go:130] > configmap/coredns replaced
	I1031 17:56:20.802133  262782 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1031 17:56:20.897855  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:20.897912  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:20.897925  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:20.897942  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:20.900603  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:20.900628  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:20.900635  262782 round_trippers.go:580]     Audit-Id: e8460fbc-989f-4ca2-b4b4-43d5ba0e009b
	I1031 17:56:20.900641  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:20.900646  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:20.900651  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:20.900658  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:20.900667  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:20 GMT
	I1031 17:56:20.900856  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.120783  262782 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1031 17:56:21.120823  262782 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1031 17:56:21.120832  262782 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120840  262782 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1031 17:56:21.120845  262782 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1031 17:56:21.120853  262782 command_runner.go:130] > pod/storage-provisioner created
	I1031 17:56:21.120880  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.227295444s)
	I1031 17:56:21.120923  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.120942  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.120939  262782 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1031 17:56:21.120983  262782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.17246655s)
	I1031 17:56:21.121022  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121036  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121347  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121367  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121375  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121378  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121389  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121403  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121420  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121435  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.121455  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.121681  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.121719  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.121733  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.121866  262782 round_trippers.go:463] GET https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses
	I1031 17:56:21.121882  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.121894  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.121909  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.122102  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.122118  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.124846  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.124866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.124874  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.124881  262782 round_trippers.go:580]     Content-Length: 1273
	I1031 17:56:21.124890  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.124902  262782 round_trippers.go:580]     Audit-Id: f167eb4f-0a5a-4319-8db8-5791c73443f5
	I1031 17:56:21.124912  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.124921  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.124929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.124960  262782 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1031 17:56:21.125352  262782 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.125406  262782 round_trippers.go:463] PUT https://192.168.39.206:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1031 17:56:21.125417  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.125425  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.125431  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.125439  262782 round_trippers.go:473]     Content-Type: application/json
	I1031 17:56:21.128563  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:21.128585  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.128593  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.128602  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.128610  262782 round_trippers.go:580]     Content-Length: 1220
	I1031 17:56:21.128619  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.128631  262782 round_trippers.go:580]     Audit-Id: 052b5d55-37fa-4f64-8e68-393e70ec8253
	I1031 17:56:21.128643  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.128653  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.128715  262782 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de46fb3d-edeb-43eb-a410-43643081c798","resourceVersion":"404","creationTimestamp":"2023-10-31T17:56:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-31T17:56:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1031 17:56:21.128899  262782 main.go:141] libmachine: Making call to close driver server
	I1031 17:56:21.128915  262782 main.go:141] libmachine: (multinode-441410) Calling .Close
	I1031 17:56:21.129179  262782 main.go:141] libmachine: Successfully made call to close driver server
	I1031 17:56:21.129208  262782 main.go:141] libmachine: Making call to close connection to plugin binary
	I1031 17:56:21.129233  262782 main.go:141] libmachine: (multinode-441410) DBG | Closing plugin on server side
	I1031 17:56:21.131420  262782 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 17:56:21.132970  262782 addons.go:502] enable addons completed in 1.556259875s: enabled=[storage-provisioner default-storageclass]
	I1031 17:56:21.398005  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.398056  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.398066  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.401001  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.401037  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.401045  262782 round_trippers.go:580]     Audit-Id: 56ed004b-43c8-40be-a2b6-73002cd3b80e
	I1031 17:56:21.401052  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.401058  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.401064  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.401069  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.401074  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.401199  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.897700  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:21.897734  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:21.897743  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:21.897750  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:21.900735  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:21.900769  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:21.900779  262782 round_trippers.go:580]     Audit-Id: 18bf880f-eb4a-4a4a-9b0f-1e7afa9179f5
	I1031 17:56:21.900787  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:21.900796  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:21.900806  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:21.900815  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:21.900825  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:21 GMT
	I1031 17:56:21.900962  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:21.901302  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:22.397652  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.397684  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.397699  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.397708  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.401179  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.401218  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.401227  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.401236  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.401245  262782 round_trippers.go:580]     Audit-Id: 74307e9b-0aa4-406d-81b4-20ae711ed6ba
	I1031 17:56:22.401253  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.401264  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.401413  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:22.898179  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:22.898207  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:22.898218  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:22.898226  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:22.901313  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:22.901343  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:22.901355  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:22 GMT
	I1031 17:56:22.901364  262782 round_trippers.go:580]     Audit-Id: 3ad1b8ed-a5df-4ef6-a4b6-fbb06c75e74e
	I1031 17:56:22.901372  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:22.901380  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:22.901388  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:22.901396  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:22.901502  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.398189  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.398221  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.398233  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.398242  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.401229  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:23.401261  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.401272  262782 round_trippers.go:580]     Audit-Id: a065f182-6710-4016-bdaa-6535442b31db
	I1031 17:56:23.401281  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.401289  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.401298  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.401307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.401314  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.401433  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.898175  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:23.898205  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:23.898222  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:23.898231  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:23.901722  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:23.901745  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:23.901752  262782 round_trippers.go:580]     Audit-Id: 56214876-253a-4694-8f9c-5d674fb1c607
	I1031 17:56:23.901757  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:23.901762  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:23.901767  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:23.901773  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:23.901786  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:23 GMT
	I1031 17:56:23.901957  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:23.902397  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:24.397863  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.397896  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.397908  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.397917  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.401755  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:24.401785  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.401793  262782 round_trippers.go:580]     Audit-Id: 10784a9a-e667-4953-9e74-c589289c8031
	I1031 17:56:24.401798  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.401803  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.401813  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.401818  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.401824  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.402390  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:24.897986  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:24.898023  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:24.898057  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:24.898068  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:24.900977  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:24.901003  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:24.901012  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:24 GMT
	I1031 17:56:24.901019  262782 round_trippers.go:580]     Audit-Id: 3416d136-1d3f-4dd5-8d47-f561804ebee5
	I1031 17:56:24.901026  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:24.901033  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:24.901042  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:24.901048  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:24.901260  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.398017  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.398061  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.398073  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.398082  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.400743  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.400771  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.400781  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.400789  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.400797  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.400805  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.400814  262782 round_trippers.go:580]     Audit-Id: ab19ae0b-ae1e-4558-b056-9c010ab87b42
	I1031 17:56:25.400822  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.400985  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:25.897694  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:25.897728  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:25.897743  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:25.897751  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:25.900304  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:25.900334  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:25.900345  262782 round_trippers.go:580]     Audit-Id: 370da961-9f4a-46ec-bbb9-93fdb930eacb
	I1031 17:56:25.900354  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:25.900362  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:25.900370  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:25.900377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:25.900386  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:25 GMT
	I1031 17:56:25.900567  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.397259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.397302  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.397314  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.397323  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.400041  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:26.400066  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.400077  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.400086  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.400094  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.400101  262782 round_trippers.go:580]     Audit-Id: db53b14e-41aa-4bdd-bea4-50531bf89210
	I1031 17:56:26.400109  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.400118  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.400339  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:26.400742  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:26.897979  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:26.898011  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:26.898020  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:26.898026  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:26.912238  262782 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1031 17:56:26.912270  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:26.912282  262782 round_trippers.go:580]     Audit-Id: 9ac937db-b0d7-4d97-94fe-9bb846528042
	I1031 17:56:26.912290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:26.912299  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:26.912307  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:26.912315  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:26.912322  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:26 GMT
	I1031 17:56:26.912454  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.398165  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.398189  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.398200  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.398207  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.401228  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:27.401254  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.401264  262782 round_trippers.go:580]     Audit-Id: f4ac85f4-3369-4c9f-82f1-82efb4fd5de8
	I1031 17:56:27.401272  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.401280  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.401287  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.401294  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.401303  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.401534  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:27.897211  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:27.897239  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:27.897250  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:27.897257  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:27.900320  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:27.900350  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:27.900362  262782 round_trippers.go:580]     Audit-Id: 8eceb12f-92e3-4fd4-9fbb-1a7b1fda9c18
	I1031 17:56:27.900370  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:27.900378  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:27.900385  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:27.900393  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:27.900408  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:27 GMT
	I1031 17:56:27.900939  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.397631  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.397659  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.397672  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.397682  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.400774  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:28.400799  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.400807  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.400813  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.400818  262782 round_trippers.go:580]     Audit-Id: c8803f2d-c322-44d7-bd45-f48632adec33
	I1031 17:56:28.400823  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.400830  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.400835  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.401033  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:28.401409  262782 node_ready.go:58] node "multinode-441410" has status "Ready":"False"
	I1031 17:56:28.897617  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:28.897642  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:28.897653  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:28.897660  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:28.902175  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:28.902205  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:28.902215  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:28.902223  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:28 GMT
	I1031 17:56:28.902231  262782 round_trippers.go:580]     Audit-Id: a173406e-e980-4828-a034-9c9554913d28
	I1031 17:56:28.902238  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:28.902246  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:28.902253  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:28.902434  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.397493  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.397525  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.397538  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.397546  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.400347  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.400371  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.400378  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.400384  262782 round_trippers.go:580]     Audit-Id: f9b357fa-d73f-4c80-99d7-6b2d621cbdc2
	I1031 17:56:29.400389  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.400394  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.400399  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.400404  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.400583  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:29.897860  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:29.897888  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:29.897900  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:29.897906  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:29.900604  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:29.900630  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:29.900636  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:29 GMT
	I1031 17:56:29.900641  262782 round_trippers.go:580]     Audit-Id: d3fd2d34-2e6f-415c-ac56-cf7ccf92ba3a
	I1031 17:56:29.900646  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:29.900663  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:29.900668  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:29.900673  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:29.900880  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"344","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4935 chars]
	I1031 17:56:30.397565  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.397590  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.397599  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.397605  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.405509  262782 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1031 17:56:30.405535  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.405542  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.405548  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.405553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.405558  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.405563  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.405568  262782 round_trippers.go:580]     Audit-Id: 62aa1c85-a1ac-4951-84b7-7dc0462636ce
	I1031 17:56:30.408600  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.408902  262782 node_ready.go:49] node "multinode-441410" has status "Ready":"True"
	I1031 17:56:30.408916  262782 node_ready.go:38] duration metric: took 10.518710789s waiting for node "multinode-441410" to be "Ready" ...
	I1031 17:56:30.408926  262782 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:30.408989  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:30.409009  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.409016  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.409022  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.415274  262782 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1031 17:56:30.415298  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.415306  262782 round_trippers.go:580]     Audit-Id: e876f932-cc7b-4e46-83ba-19124569b98f
	I1031 17:56:30.415311  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.415316  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.415321  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.415327  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.415336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.416844  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I1031 17:56:30.419752  262782 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:30.419841  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.419846  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.419854  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.419861  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.424162  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.424191  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.424200  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.424208  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.424215  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.424222  262782 round_trippers.go:580]     Audit-Id: efa63093-f26c-4522-9235-152008a08b2d
	I1031 17:56:30.424230  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.424238  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.430413  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.430929  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.430944  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.430952  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.430960  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.436768  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.436796  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.436803  262782 round_trippers.go:580]     Audit-Id: 25de4d8d-720e-4845-93a4-f6fac8c06716
	I1031 17:56:30.436809  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.436814  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.436819  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.436824  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.436829  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.437894  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.438248  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.438262  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.438269  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.438274  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.443895  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.443917  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.443924  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.443929  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.443934  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.443939  262782 round_trippers.go:580]     Audit-Id: 0f1d1fbe-c670-4d8f-9099-2277c418f70d
	I1031 17:56:30.443944  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.443950  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.444652  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.445254  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.445279  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.445289  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.445298  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.450829  262782 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1031 17:56:30.450851  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.450857  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.450863  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.450868  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.450873  262782 round_trippers.go:580]     Audit-Id: cf146bdc-539d-4cc8-8a90-4322611e31e3
	I1031 17:56:30.450878  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.450885  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.451504  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:30.952431  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:30.952464  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.952472  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.952478  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.955870  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:30.955918  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.955927  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.955933  262782 round_trippers.go:580]     Audit-Id: 5a97492e-4851-478a-b56a-0ff92f8c3283
	I1031 17:56:30.955938  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.955944  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.955949  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.955955  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.956063  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:30.956507  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:30.956519  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:30.956526  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:30.956532  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:30.960669  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:30.960696  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:30.960707  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:30 GMT
	I1031 17:56:30.960716  262782 round_trippers.go:580]     Audit-Id: c3b57e65-e912-4e1f-801e-48e843be4981
	I1031 17:56:30.960724  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:30.960732  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:30.960741  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:30.960749  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:30.960898  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.452489  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.452516  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.452530  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.452536  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.455913  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.455949  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.455959  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.455968  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.455977  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.455986  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.455995  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.456007  262782 round_trippers.go:580]     Audit-Id: 803a6ca4-73cc-466f-8a28-ded7529f1eab
	I1031 17:56:31.456210  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.456849  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.456875  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.456886  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.456895  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.459863  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.459892  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.459903  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.459912  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.459921  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.459930  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.459938  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.459947  262782 round_trippers.go:580]     Audit-Id: 7345bb0d-3e2d-4be2-a718-665c409d3cc4
	I1031 17:56:31.460108  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:31.952754  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:31.952780  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.952789  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.952795  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.956091  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:31.956114  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.956122  262782 round_trippers.go:580]     Audit-Id: 46b06260-451c-4f0c-8146-083b357573d9
	I1031 17:56:31.956127  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.956132  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.956137  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.956144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.956149  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.956469  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:31.956984  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:31.957002  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:31.957010  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:31.957015  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:31.959263  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:31.959279  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:31.959285  262782 round_trippers.go:580]     Audit-Id: 88092291-7cf6-4d41-aa7b-355d964a3f3e
	I1031 17:56:31.959290  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:31.959302  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:31.959312  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:31.959328  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:31.959336  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:31 GMT
	I1031 17:56:31.959645  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.452325  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.452353  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.452361  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.452367  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.456328  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.456354  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.456363  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.456371  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.456379  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.456386  262782 round_trippers.go:580]     Audit-Id: 18ebe92d-11e9-4e52-82a1-8a35fbe20ad9
	I1031 17:56:32.456393  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.456400  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.456801  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"434","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1031 17:56:32.457274  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.457289  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.457299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.457308  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.459434  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.459456  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.459466  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.459475  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.459486  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.459495  262782 round_trippers.go:580]     Audit-Id: 99747f2a-1e6c-4985-8b50-9b99676ddac8
	I1031 17:56:32.459503  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.459515  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.459798  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.460194  262782 pod_ready.go:102] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"False"
	I1031 17:56:32.952501  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lwggp
	I1031 17:56:32.952533  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.952543  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.952551  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.955750  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:32.955776  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.955786  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.955795  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.955804  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.955812  262782 round_trippers.go:580]     Audit-Id: 25877d49-35b9-4feb-8529-7573d2bc7d5c
	I1031 17:56:32.955818  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.955823  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.956346  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I1031 17:56:32.956810  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.956823  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.956834  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.956843  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.959121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.959148  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.959155  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.959161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.959166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.959171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.959177  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.959182  262782 round_trippers.go:580]     Audit-Id: fdf3ede0-0a5f-4c8b-958d-cd09542351ab
	I1031 17:56:32.959351  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.959716  262782 pod_ready.go:92] pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.959735  262782 pod_ready.go:81] duration metric: took 2.539957521s waiting for pod "coredns-5dd5756b68-lwggp" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959749  262782 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.959892  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-441410
	I1031 17:56:32.959918  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.959930  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.959939  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.962113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.962137  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.962147  262782 round_trippers.go:580]     Audit-Id: de8d55ff-26c1-4424-8832-d658a86c0287
	I1031 17:56:32.962156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.962162  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.962168  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.962173  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.962178  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.962314  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-441410","namespace":"kube-system","uid":"32cdcb0c-227d-4af3-b6ee-b9d26bbfa333","resourceVersion":"419","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.206:2379","kubernetes.io/config.hash":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.mirror":"77641703b150fa80ab6f4b864674eb56","kubernetes.io/config.seen":"2023-10-31T17:56:06.697480598Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I1031 17:56:32.962842  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.962858  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.962869  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.962879  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.964975  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.964995  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.965002  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.965007  262782 round_trippers.go:580]     Audit-Id: d4b3da6f-850f-45ed-ad57-eae81644c181
	I1031 17:56:32.965012  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.965017  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.965022  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.965029  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.965140  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.965506  262782 pod_ready.go:92] pod "etcd-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.965524  262782 pod_ready.go:81] duration metric: took 5.763819ms waiting for pod "etcd-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965539  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.965607  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-441410
	I1031 17:56:32.965618  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.965627  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.965637  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.968113  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.968131  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.968137  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.968142  262782 round_trippers.go:580]     Audit-Id: 73744b16-b390-4d57-9997-f269a1fde7d6
	I1031 17:56:32.968147  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.968152  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.968157  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.968162  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.968364  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-441410","namespace":"kube-system","uid":"8b47a43e-7543-4566-a610-637c32de5614","resourceVersion":"420","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.206:8443","kubernetes.io/config.hash":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.mirror":"f4f584a5c299b8b91cb08104ddd09da0","kubernetes.io/config.seen":"2023-10-31T17:56:06.697481635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I1031 17:56:32.968770  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.968784  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.968795  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.968804  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.970795  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:32.970829  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.970836  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.970841  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.970847  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.970852  262782 round_trippers.go:580]     Audit-Id: e08c51de-8454-4703-b89c-73c8d479a150
	I1031 17:56:32.970857  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.970864  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.970981  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.971275  262782 pod_ready.go:92] pod "kube-apiserver-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.971292  262782 pod_ready.go:81] duration metric: took 5.744209ms waiting for pod "kube-apiserver-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971306  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.971376  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-441410
	I1031 17:56:32.971387  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.971399  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.971410  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.973999  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.974016  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.974022  262782 round_trippers.go:580]     Audit-Id: 0c2aa0f5-8551-4405-a61a-eb6ed245947f
	I1031 17:56:32.974027  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.974041  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.974046  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.974051  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.974059  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.974731  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-441410","namespace":"kube-system","uid":"a8d3ff28-d159-40f9-a68b-8d584c987892","resourceVersion":"418","creationTimestamp":"2023-10-31T17:56:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.mirror":"19906eeba0e0ee55d33a0ac06ed3288c","kubernetes.io/config.seen":"2023-10-31T17:55:58.517712152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I1031 17:56:32.975356  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:32.975375  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.975386  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.975428  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:32.978337  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:32.978355  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:32.978362  262782 round_trippers.go:580]     Audit-Id: 7735aec3-f9dd-4999-b7d3-3e3b63c1d821
	I1031 17:56:32.978367  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:32.978372  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:32.978377  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:32.978382  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:32.978388  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:32.978632  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:32.978920  262782 pod_ready.go:92] pod "kube-controller-manager-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:32.978938  262782 pod_ready.go:81] duration metric: took 7.622994ms waiting for pod "kube-controller-manager-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.978952  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:32.998349  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbl8r
	I1031 17:56:32.998378  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:32.998394  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:32.998403  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.001078  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.001103  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.001110  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.001116  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:32 GMT
	I1031 17:56:33.001121  262782 round_trippers.go:580]     Audit-Id: aebe9f70-9c46-4a23-9ade-371effac8515
	I1031 17:56:33.001128  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.001136  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.001144  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.001271  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tbl8r","generateName":"kube-proxy-","namespace":"kube-system","uid":"6c0f54ca-e87f-4d58-a609-41877ec4be36","resourceVersion":"414","creationTimestamp":"2023-10-31T17:56:18Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"32686e2f-4b7a-494b-8a18-a1d58f486cce","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"32686e2f-4b7a-494b-8a18-a1d58f486cce\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1031 17:56:33.198161  262782 request.go:629] Waited for 196.45796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198244  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.198252  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.198263  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.198272  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.201121  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.201143  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.201150  262782 round_trippers.go:580]     Audit-Id: 39428626-770c-4ddf-9329-f186386f38ed
	I1031 17:56:33.201156  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.201161  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.201166  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.201171  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.201175  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.201329  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.201617  262782 pod_ready.go:92] pod "kube-proxy-tbl8r" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.201632  262782 pod_ready.go:81] duration metric: took 222.672541ms waiting for pod "kube-proxy-tbl8r" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.201642  262782 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.398184  262782 request.go:629] Waited for 196.449917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398259  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-441410
	I1031 17:56:33.398265  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.398273  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.398291  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.401184  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.401208  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.401217  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.401226  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.401234  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.401242  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.401253  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.401259  262782 round_trippers.go:580]     Audit-Id: 1fcc7dab-75f4-4f82-a0a4-5f6beea832ef
	I1031 17:56:33.401356  262782 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-441410","namespace":"kube-system","uid":"92181f82-4199-4cd3-a89a-8d4094c64f26","resourceVersion":"335","creationTimestamp":"2023-10-31T17:56:07Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.mirror":"3e34d8a62aa11fb0511a2f36ec14f782","kubernetes.io/config.seen":"2023-10-31T17:56:06.697476593Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I1031 17:56:33.598222  262782 request.go:629] Waited for 196.401287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598286  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes/multinode-441410
	I1031 17:56:33.598291  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.598299  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.598305  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.600844  262782 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1031 17:56:33.600866  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.600879  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.600888  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.600897  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.600906  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.600913  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.600918  262782 round_trippers.go:580]     Audit-Id: 622e3fe8-bd25-4e33-ac25-26c0fdd30454
	I1031 17:56:33.601237  262782 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2023-10-31T17:56:03Z","fieldsType":"Fields [truncated 4790 chars]
	I1031 17:56:33.601536  262782 pod_ready.go:92] pod "kube-scheduler-multinode-441410" in "kube-system" namespace has status "Ready":"True"
	I1031 17:56:33.601549  262782 pod_ready.go:81] duration metric: took 399.901026ms waiting for pod "kube-scheduler-multinode-441410" in "kube-system" namespace to be "Ready" ...
	I1031 17:56:33.601560  262782 pod_ready.go:38] duration metric: took 3.192620454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 17:56:33.601580  262782 api_server.go:52] waiting for apiserver process to appear ...
	I1031 17:56:33.601626  262782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 17:56:33.614068  262782 command_runner.go:130] > 1894
	I1031 17:56:33.614461  262782 api_server.go:72] duration metric: took 13.992340777s to wait for apiserver process to appear ...
	I1031 17:56:33.614486  262782 api_server.go:88] waiting for apiserver healthz status ...
	I1031 17:56:33.614505  262782 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1031 17:56:33.620259  262782 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1031 17:56:33.620337  262782 round_trippers.go:463] GET https://192.168.39.206:8443/version
	I1031 17:56:33.620344  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.620352  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.620358  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.621387  262782 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1031 17:56:33.621407  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.621415  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.621422  262782 round_trippers.go:580]     Content-Length: 264
	I1031 17:56:33.621427  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.621432  262782 round_trippers.go:580]     Audit-Id: 640b6af3-db08-45da-8d6b-aa48f5c0ed10
	I1031 17:56:33.621438  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.621444  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.621455  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.621474  262782 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1031 17:56:33.621562  262782 api_server.go:141] control plane version: v1.28.3
	I1031 17:56:33.621579  262782 api_server.go:131] duration metric: took 7.087121ms to wait for apiserver health ...
	I1031 17:56:33.621588  262782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 17:56:33.798130  262782 request.go:629] Waited for 176.435578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798223  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:33.798231  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.798241  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.798256  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:33.802450  262782 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1031 17:56:33.802474  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:33.802484  262782 round_trippers.go:580]     Audit-Id: eee25c7b-6b31-438a-8e38-dd3287bc02a6
	I1031 17:56:33.802490  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:33.802495  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:33.802500  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:33.802505  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:33.802510  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:33.803462  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:33.805850  262782 system_pods.go:59] 8 kube-system pods found
	I1031 17:56:33.805890  262782 system_pods.go:61] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:33.805899  262782 system_pods.go:61] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:33.805906  262782 system_pods.go:61] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:33.805913  262782 system_pods.go:61] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:33.805920  262782 system_pods.go:61] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:33.805927  262782 system_pods.go:61] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:33.805936  262782 system_pods.go:61] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:33.805943  262782 system_pods.go:61] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:33.805954  262782 system_pods.go:74] duration metric: took 184.359632ms to wait for pod list to return data ...
	I1031 17:56:33.805968  262782 default_sa.go:34] waiting for default service account to be created ...
	I1031 17:56:33.998484  262782 request.go:629] Waited for 192.418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998555  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/default/serviceaccounts
	I1031 17:56:33.998560  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:33.998568  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:33.998575  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.001649  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.001682  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.001694  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.001701  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.001707  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.001712  262782 round_trippers.go:580]     Content-Length: 261
	I1031 17:56:34.001717  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:33 GMT
	I1031 17:56:34.001727  262782 round_trippers.go:580]     Audit-Id: 8602fc8d-9bfb-4eb5-887c-3d6ba13b0575
	I1031 17:56:34.001732  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.001761  262782 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2796f395-ca7f-49f0-a99a-583ecb946344","resourceVersion":"373","creationTimestamp":"2023-10-31T17:56:19Z"}}]}
	I1031 17:56:34.002053  262782 default_sa.go:45] found service account: "default"
	I1031 17:56:34.002077  262782 default_sa.go:55] duration metric: took 196.098944ms for default service account to be created ...
	I1031 17:56:34.002089  262782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 17:56:34.197616  262782 request.go:629] Waited for 195.368679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197712  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/namespaces/kube-system/pods
	I1031 17:56:34.197720  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.197732  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.197741  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.201487  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.201514  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.201522  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.201532  262782 round_trippers.go:580]     Audit-Id: d140750d-88b3-48a4-b946-3bbca3397f7e
	I1031 17:56:34.201537  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.201542  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.201547  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.201553  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.202224  262782 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lwggp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"13e0e515-f978-4945-abf2-8224996d04b7","resourceVersion":"447","creationTimestamp":"2023-10-31T17:56:19Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"5dafd446-39fc-44db-94aa-f72d7c4fb065","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-31T17:56:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5dafd446-39fc-44db-94aa-f72d7c4fb065\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I1031 17:56:34.203932  262782 system_pods.go:86] 8 kube-system pods found
	I1031 17:56:34.203958  262782 system_pods.go:89] "coredns-5dd5756b68-lwggp" [13e0e515-f978-4945-abf2-8224996d04b7] Running
	I1031 17:56:34.203966  262782 system_pods.go:89] "etcd-multinode-441410" [32cdcb0c-227d-4af3-b6ee-b9d26bbfa333] Running
	I1031 17:56:34.203972  262782 system_pods.go:89] "kindnet-6rrkf" [ee7915c4-6d8d-49d1-9e06-12fe2d3aad54] Running
	I1031 17:56:34.203978  262782 system_pods.go:89] "kube-apiserver-multinode-441410" [8b47a43e-7543-4566-a610-637c32de5614] Running
	I1031 17:56:34.203985  262782 system_pods.go:89] "kube-controller-manager-multinode-441410" [a8d3ff28-d159-40f9-a68b-8d584c987892] Running
	I1031 17:56:34.203990  262782 system_pods.go:89] "kube-proxy-tbl8r" [6c0f54ca-e87f-4d58-a609-41877ec4be36] Running
	I1031 17:56:34.203996  262782 system_pods.go:89] "kube-scheduler-multinode-441410" [92181f82-4199-4cd3-a89a-8d4094c64f26] Running
	I1031 17:56:34.204002  262782 system_pods.go:89] "storage-provisioner" [24199518-9184-4f82-a011-afe05284ce89] Running
	I1031 17:56:34.204012  262782 system_pods.go:126] duration metric: took 201.916856ms to wait for k8s-apps to be running ...
	I1031 17:56:34.204031  262782 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 17:56:34.204085  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 17:56:34.219046  262782 system_svc.go:56] duration metric: took 15.013064ms WaitForService to wait for kubelet.
	I1031 17:56:34.219080  262782 kubeadm.go:581] duration metric: took 14.596968131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 17:56:34.219107  262782 node_conditions.go:102] verifying NodePressure condition ...
	I1031 17:56:34.398566  262782 request.go:629] Waited for 179.364161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398639  262782 round_trippers.go:463] GET https://192.168.39.206:8443/api/v1/nodes
	I1031 17:56:34.398646  262782 round_trippers.go:469] Request Headers:
	I1031 17:56:34.398658  262782 round_trippers.go:473]     Accept: application/json, */*
	I1031 17:56:34.398666  262782 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1031 17:56:34.401782  262782 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1031 17:56:34.401804  262782 round_trippers.go:577] Response Headers:
	I1031 17:56:34.401811  262782 round_trippers.go:580]     Audit-Id: 597137e7-80bd-4d61-95ec-ed64464d9016
	I1031 17:56:34.401816  262782 round_trippers.go:580]     Cache-Control: no-cache, private
	I1031 17:56:34.401821  262782 round_trippers.go:580]     Content-Type: application/json
	I1031 17:56:34.401831  262782 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 063cdbcf-94a0-47e8-bc03-06675f244fa7
	I1031 17:56:34.401837  262782 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 49e28bce-dc3f-4cac-8156-a74525bd6cfc
	I1031 17:56:34.401842  262782 round_trippers.go:580]     Date: Tue, 31 Oct 2023 17:56:34 GMT
	I1031 17:56:34.402077  262782 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-441410","uid":"ec7c7362-9a59-4c1f-a6c7-b1f3bbe928aa","resourceVersion":"429","creationTimestamp":"2023-10-31T17:56:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-441410","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a71321dec093a6a5f401a04c4a033d482891db45","minikube.k8s.io/name":"multinode-441410","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_31T17_56_07_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4843 chars]
	I1031 17:56:34.402470  262782 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 17:56:34.402496  262782 node_conditions.go:123] node cpu capacity is 2
	I1031 17:56:34.402510  262782 node_conditions.go:105] duration metric: took 183.396121ms to run NodePressure ...
	I1031 17:56:34.402526  262782 start.go:228] waiting for startup goroutines ...
	I1031 17:56:34.402540  262782 start.go:233] waiting for cluster config update ...
	I1031 17:56:34.402551  262782 start.go:242] writing updated cluster config ...
	I1031 17:56:34.404916  262782 out.go:177] 
	I1031 17:56:34.406657  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:34.406738  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.408765  262782 out.go:177] * Starting worker node multinode-441410-m02 in cluster multinode-441410
	I1031 17:56:34.410228  262782 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:56:34.410258  262782 cache.go:56] Caching tarball of preloaded images
	I1031 17:56:34.410410  262782 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 17:56:34.410427  262782 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 17:56:34.410527  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:56:34.410749  262782 start.go:365] acquiring machines lock for multinode-441410-m02: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 17:56:34.410805  262782 start.go:369] acquired machines lock for "multinode-441410-m02" in 34.105µs
	I1031 17:56:34.410838  262782 start.go:93] Provisioning new machine with config: &{Name:multinode-441410 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-4
41410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1031 17:56:34.410944  262782 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1031 17:56:34.412645  262782 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1031 17:56:34.412740  262782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:56:34.412781  262782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:56:34.427853  262782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I1031 17:56:34.428335  262782 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:56:34.428909  262782 main.go:141] libmachine: Using API Version  1
	I1031 17:56:34.428934  262782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:56:34.429280  262782 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:56:34.429481  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:34.429649  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:34.429810  262782 start.go:159] libmachine.API.Create for "multinode-441410" (driver="kvm2")
	I1031 17:56:34.429843  262782 client.go:168] LocalClient.Create starting
	I1031 17:56:34.429884  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem
	I1031 17:56:34.429928  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.429950  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430027  262782 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem
	I1031 17:56:34.430075  262782 main.go:141] libmachine: Decoding PEM data...
	I1031 17:56:34.430092  262782 main.go:141] libmachine: Parsing certificate...
	I1031 17:56:34.430122  262782 main.go:141] libmachine: Running pre-create checks...
	I1031 17:56:34.430135  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .PreCreateCheck
	I1031 17:56:34.430340  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:34.430821  262782 main.go:141] libmachine: Creating machine...
	I1031 17:56:34.430837  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .Create
	I1031 17:56:34.430956  262782 main.go:141] libmachine: (multinode-441410-m02) Creating KVM machine...
	I1031 17:56:34.432339  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing default KVM network
	I1031 17:56:34.432459  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found existing private KVM network mk-multinode-441410
	I1031 17:56:34.432636  262782 main.go:141] libmachine: (multinode-441410-m02) Setting up store path in /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.432664  262782 main.go:141] libmachine: (multinode-441410-m02) Building disk image from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:56:34.432758  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.432647  263164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.432893  262782 main.go:141] libmachine: (multinode-441410-m02) Downloading /home/jenkins/minikube-integration/17530-243226/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso...
	I1031 17:56:34.660016  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.659852  263164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa...
	I1031 17:56:34.776281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776145  263164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk...
	I1031 17:56:34.776316  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing magic tar header
	I1031 17:56:34.776334  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Writing SSH key tar header
	I1031 17:56:34.776348  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:34.776277  263164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 ...
	I1031 17:56:34.776462  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02 (perms=drwx------)
	I1031 17:56:34.776495  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02
	I1031 17:56:34.776509  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube/machines (perms=drwxr-xr-x)
	I1031 17:56:34.776554  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube/machines
	I1031 17:56:34.776593  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:56:34.776620  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226/.minikube (perms=drwxr-xr-x)
	I1031 17:56:34.776639  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration/17530-243226 (perms=drwxrwxr-x)
	I1031 17:56:34.776655  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1031 17:56:34.776674  262782 main.go:141] libmachine: (multinode-441410-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1031 17:56:34.776689  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:34.776705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17530-243226
	I1031 17:56:34.776723  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1031 17:56:34.776739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home/jenkins
	I1031 17:56:34.776757  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Checking permissions on dir: /home
	I1031 17:56:34.776770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Skipping /home - not owner
	I1031 17:56:34.777511  262782 main.go:141] libmachine: (multinode-441410-m02) define libvirt domain using xml: 
	I1031 17:56:34.777538  262782 main.go:141] libmachine: (multinode-441410-m02) <domain type='kvm'>
	I1031 17:56:34.777547  262782 main.go:141] libmachine: (multinode-441410-m02)   <name>multinode-441410-m02</name>
	I1031 17:56:34.777553  262782 main.go:141] libmachine: (multinode-441410-m02)   <memory unit='MiB'>2200</memory>
	I1031 17:56:34.777562  262782 main.go:141] libmachine: (multinode-441410-m02)   <vcpu>2</vcpu>
	I1031 17:56:34.777572  262782 main.go:141] libmachine: (multinode-441410-m02)   <features>
	I1031 17:56:34.777585  262782 main.go:141] libmachine: (multinode-441410-m02)     <acpi/>
	I1031 17:56:34.777597  262782 main.go:141] libmachine: (multinode-441410-m02)     <apic/>
	I1031 17:56:34.777607  262782 main.go:141] libmachine: (multinode-441410-m02)     <pae/>
	I1031 17:56:34.777620  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.777652  262782 main.go:141] libmachine: (multinode-441410-m02)   </features>
	I1031 17:56:34.777680  262782 main.go:141] libmachine: (multinode-441410-m02)   <cpu mode='host-passthrough'>
	I1031 17:56:34.777694  262782 main.go:141] libmachine: (multinode-441410-m02)   
	I1031 17:56:34.777709  262782 main.go:141] libmachine: (multinode-441410-m02)   </cpu>
	I1031 17:56:34.777736  262782 main.go:141] libmachine: (multinode-441410-m02)   <os>
	I1031 17:56:34.777760  262782 main.go:141] libmachine: (multinode-441410-m02)     <type>hvm</type>
	I1031 17:56:34.777775  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='cdrom'/>
	I1031 17:56:34.777788  262782 main.go:141] libmachine: (multinode-441410-m02)     <boot dev='hd'/>
	I1031 17:56:34.777802  262782 main.go:141] libmachine: (multinode-441410-m02)     <bootmenu enable='no'/>
	I1031 17:56:34.777811  262782 main.go:141] libmachine: (multinode-441410-m02)   </os>
	I1031 17:56:34.777819  262782 main.go:141] libmachine: (multinode-441410-m02)   <devices>
	I1031 17:56:34.777828  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='cdrom'>
	I1031 17:56:34.777863  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/boot2docker.iso'/>
	I1031 17:56:34.777883  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hdc' bus='scsi'/>
	I1031 17:56:34.777895  262782 main.go:141] libmachine: (multinode-441410-m02)       <readonly/>
	I1031 17:56:34.777912  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777927  262782 main.go:141] libmachine: (multinode-441410-m02)     <disk type='file' device='disk'>
	I1031 17:56:34.777941  262782 main.go:141] libmachine: (multinode-441410-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1031 17:56:34.777959  262782 main.go:141] libmachine: (multinode-441410-m02)       <source file='/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/multinode-441410-m02.rawdisk'/>
	I1031 17:56:34.777971  262782 main.go:141] libmachine: (multinode-441410-m02)       <target dev='hda' bus='virtio'/>
	I1031 17:56:34.777984  262782 main.go:141] libmachine: (multinode-441410-m02)     </disk>
	I1031 17:56:34.777997  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778014  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='mk-multinode-441410'/>
	I1031 17:56:34.778029  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778052  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778074  262782 main.go:141] libmachine: (multinode-441410-m02)     <interface type='network'>
	I1031 17:56:34.778093  262782 main.go:141] libmachine: (multinode-441410-m02)       <source network='default'/>
	I1031 17:56:34.778107  262782 main.go:141] libmachine: (multinode-441410-m02)       <model type='virtio'/>
	I1031 17:56:34.778119  262782 main.go:141] libmachine: (multinode-441410-m02)     </interface>
	I1031 17:56:34.778137  262782 main.go:141] libmachine: (multinode-441410-m02)     <serial type='pty'>
	I1031 17:56:34.778153  262782 main.go:141] libmachine: (multinode-441410-m02)       <target port='0'/>
	I1031 17:56:34.778171  262782 main.go:141] libmachine: (multinode-441410-m02)     </serial>
	I1031 17:56:34.778190  262782 main.go:141] libmachine: (multinode-441410-m02)     <console type='pty'>
	I1031 17:56:34.778205  262782 main.go:141] libmachine: (multinode-441410-m02)       <target type='serial' port='0'/>
	I1031 17:56:34.778225  262782 main.go:141] libmachine: (multinode-441410-m02)     </console>
	I1031 17:56:34.778237  262782 main.go:141] libmachine: (multinode-441410-m02)     <rng model='virtio'>
	I1031 17:56:34.778251  262782 main.go:141] libmachine: (multinode-441410-m02)       <backend model='random'>/dev/random</backend>
	I1031 17:56:34.778262  262782 main.go:141] libmachine: (multinode-441410-m02)     </rng>
	I1031 17:56:34.778282  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778296  262782 main.go:141] libmachine: (multinode-441410-m02)     
	I1031 17:56:34.778314  262782 main.go:141] libmachine: (multinode-441410-m02)   </devices>
	I1031 17:56:34.778328  262782 main.go:141] libmachine: (multinode-441410-m02) </domain>
	I1031 17:56:34.778339  262782 main.go:141] libmachine: (multinode-441410-m02) 
	I1031 17:56:34.785231  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:58:c5:0e in network default
	I1031 17:56:34.785864  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring networks are active...
	I1031 17:56:34.785906  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:34.786721  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network default is active
	I1031 17:56:34.786980  262782 main.go:141] libmachine: (multinode-441410-m02) Ensuring network mk-multinode-441410 is active
	I1031 17:56:34.787275  262782 main.go:141] libmachine: (multinode-441410-m02) Getting domain xml...
	I1031 17:56:34.787971  262782 main.go:141] libmachine: (multinode-441410-m02) Creating domain...
	I1031 17:56:36.080509  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting to get IP...
	I1031 17:56:36.081281  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.081619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.081645  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.081592  263164 retry.go:31] will retry after 258.200759ms: waiting for machine to come up
	I1031 17:56:36.341301  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.341791  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.341815  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.341745  263164 retry.go:31] will retry after 256.5187ms: waiting for machine to come up
	I1031 17:56:36.600268  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.600770  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.600846  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.600774  263164 retry.go:31] will retry after 300.831329ms: waiting for machine to come up
	I1031 17:56:36.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:36.903718  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:36.903765  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:36.903649  263164 retry.go:31] will retry after 397.916823ms: waiting for machine to come up
	I1031 17:56:37.303280  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.303741  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.303767  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.303679  263164 retry.go:31] will retry after 591.313164ms: waiting for machine to come up
	I1031 17:56:37.896539  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:37.896994  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:37.897028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:37.896933  263164 retry.go:31] will retry after 746.76323ms: waiting for machine to come up
	I1031 17:56:38.644980  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:38.645411  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:38.645444  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:38.645362  263164 retry.go:31] will retry after 894.639448ms: waiting for machine to come up
	I1031 17:56:39.541507  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:39.541972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:39.542004  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:39.541919  263164 retry.go:31] will retry after 1.268987914s: waiting for machine to come up
	I1031 17:56:40.812461  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:40.812975  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:40.813017  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:40.812970  263164 retry.go:31] will retry after 1.237754647s: waiting for machine to come up
	I1031 17:56:42.052263  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:42.052759  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:42.052786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:42.052702  263164 retry.go:31] will retry after 2.053893579s: waiting for machine to come up
	I1031 17:56:44.108353  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:44.108908  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:44.108942  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:44.108849  263164 retry.go:31] will retry after 2.792545425s: waiting for machine to come up
	I1031 17:56:46.903313  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:46.903739  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:46.903786  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:46.903686  263164 retry.go:31] will retry after 3.58458094s: waiting for machine to come up
	I1031 17:56:50.491565  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:50.492028  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:50.492059  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:50.491969  263164 retry.go:31] will retry after 3.915273678s: waiting for machine to come up
	I1031 17:56:54.412038  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:54.412378  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find current IP address of domain multinode-441410-m02 in network mk-multinode-441410
	I1031 17:56:54.412404  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | I1031 17:56:54.412344  263164 retry.go:31] will retry after 3.672029289s: waiting for machine to come up
	I1031 17:56:58.087227  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.087711  262782 main.go:141] libmachine: (multinode-441410-m02) Found IP for machine: 192.168.39.59
	I1031 17:56:58.087749  262782 main.go:141] libmachine: (multinode-441410-m02) Reserving static IP address...
	I1031 17:56:58.087760  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has current primary IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.088068  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | unable to find host DHCP lease matching {name: "multinode-441410-m02", mac: "52:54:00:52:0b:10", ip: "192.168.39.59"} in network mk-multinode-441410
	I1031 17:56:58.166887  262782 main.go:141] libmachine: (multinode-441410-m02) Reserved static IP address: 192.168.39.59
	I1031 17:56:58.166922  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Getting to WaitForSSH function...
	I1031 17:56:58.166933  262782 main.go:141] libmachine: (multinode-441410-m02) Waiting for SSH to be available...
	I1031 17:56:58.169704  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170192  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.170232  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.170422  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH client type: external
	I1031 17:56:58.170448  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa (-rw-------)
	I1031 17:56:58.170483  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 17:56:58.170502  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | About to run SSH command:
	I1031 17:56:58.170520  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | exit 0
	I1031 17:56:58.266326  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | SSH cmd err, output: <nil>: 
	I1031 17:56:58.266581  262782 main.go:141] libmachine: (multinode-441410-m02) KVM machine creation complete!
	I1031 17:56:58.267031  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:56:58.267628  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.267889  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:58.268089  262782 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1031 17:56:58.268101  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 17:56:58.269541  262782 main.go:141] libmachine: Detecting operating system of created instance...
	I1031 17:56:58.269557  262782 main.go:141] libmachine: Waiting for SSH to be available...
	I1031 17:56:58.269563  262782 main.go:141] libmachine: Getting to WaitForSSH function...
	I1031 17:56:58.269575  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.272139  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272576  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.272619  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.272751  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.272982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273136  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.273287  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.273488  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.273892  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.273911  262782 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1031 17:56:58.397270  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.397299  262782 main.go:141] libmachine: Detecting the provisioner...
	I1031 17:56:58.397309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.400057  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400428  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.400470  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.400692  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.400952  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401108  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.401252  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.401441  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.401753  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.401766  262782 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1031 17:56:58.526613  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g532a87c-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1031 17:56:58.526726  262782 main.go:141] libmachine: found compatible host: buildroot
	I1031 17:56:58.526746  262782 main.go:141] libmachine: Provisioning with buildroot...
	I1031 17:56:58.526760  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527038  262782 buildroot.go:166] provisioning hostname "multinode-441410-m02"
	I1031 17:56:58.527068  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.527247  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.529972  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530385  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.530416  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.530601  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.530797  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.530945  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.531106  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.531270  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.531783  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.531804  262782 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-441410-m02 && echo "multinode-441410-m02" | sudo tee /etc/hostname
	I1031 17:56:58.671131  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-441410-m02
	
	I1031 17:56:58.671166  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.673933  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674369  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.674424  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.674600  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:58.674890  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675118  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:58.675345  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:58.675627  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:58.676021  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:58.676054  262782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-441410-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-441410-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-441410-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 17:56:58.810950  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 17:56:58.810979  262782 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 17:56:58.811009  262782 buildroot.go:174] setting up certificates
	I1031 17:56:58.811020  262782 provision.go:83] configureAuth start
	I1031 17:56:58.811030  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetMachineName
	I1031 17:56:58.811364  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:56:58.813974  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814319  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.814344  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.814535  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:58.817084  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817394  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:58.817421  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:58.817584  262782 provision.go:138] copyHostCerts
	I1031 17:56:58.817623  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817660  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 17:56:58.817672  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 17:56:58.817746  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 17:56:58.817839  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817865  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 17:56:58.817874  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 17:56:58.817902  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 17:56:58.817953  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.817971  262782 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 17:56:58.817978  262782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 17:56:58.818016  262782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 17:56:58.818116  262782 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.multinode-441410-m02 san=[192.168.39.59 192.168.39.59 localhost 127.0.0.1 minikube multinode-441410-m02]
	I1031 17:56:59.055735  262782 provision.go:172] copyRemoteCerts
	I1031 17:56:59.055809  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 17:56:59.055835  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.058948  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059556  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.059596  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.059846  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.060097  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.060358  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.060536  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:56:59.151092  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1031 17:56:59.151207  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 17:56:59.174844  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1031 17:56:59.174927  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1031 17:56:59.199057  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1031 17:56:59.199177  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 17:56:59.221051  262782 provision.go:86] duration metric: configureAuth took 410.017469ms
	I1031 17:56:59.221078  262782 buildroot.go:189] setting minikube options for container-runtime
	I1031 17:56:59.221284  262782 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:56:59.221309  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:56:59.221639  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.224435  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.224807  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.224850  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.225028  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.225266  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225453  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.225640  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.225805  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.226302  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.226321  262782 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 17:56:59.351775  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 17:56:59.351804  262782 buildroot.go:70] root file system type: tmpfs
	I1031 17:56:59.351962  262782 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 17:56:59.351982  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.354872  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355356  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.355388  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.355557  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.355790  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356021  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.356210  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.356384  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.356691  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.356751  262782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 17:56:59.494728  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 17:56:59.494771  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:56:59.497705  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498022  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:56:59.498083  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:56:59.498324  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:56:59.498532  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498711  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:56:59.498891  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:56:59.499114  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:56:59.499427  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:56:59.499446  262782 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 17:57:00.328643  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 17:57:00.328675  262782 main.go:141] libmachine: Checking connection to Docker...
	I1031 17:57:00.328688  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetURL
	I1031 17:57:00.330108  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | Using libvirt version 6000000
	I1031 17:57:00.332457  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.332894  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.332926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.333186  262782 main.go:141] libmachine: Docker is up and running!
	I1031 17:57:00.333204  262782 main.go:141] libmachine: Reticulating splines...
	I1031 17:57:00.333212  262782 client.go:171] LocalClient.Create took 25.903358426s
	I1031 17:57:00.333237  262782 start.go:167] duration metric: libmachine.API.Create for "multinode-441410" took 25.903429891s
	I1031 17:57:00.333246  262782 start.go:300] post-start starting for "multinode-441410-m02" (driver="kvm2")
	I1031 17:57:00.333256  262782 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 17:57:00.333275  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.333553  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 17:57:00.333581  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.336008  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336418  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.336451  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.336658  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.336878  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.337062  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.337210  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.427361  262782 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 17:57:00.431240  262782 command_runner.go:130] > NAME=Buildroot
	I1031 17:57:00.431269  262782 command_runner.go:130] > VERSION=2021.02.12-1-g532a87c-dirty
	I1031 17:57:00.431277  262782 command_runner.go:130] > ID=buildroot
	I1031 17:57:00.431285  262782 command_runner.go:130] > VERSION_ID=2021.02.12
	I1031 17:57:00.431300  262782 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1031 17:57:00.431340  262782 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 17:57:00.431363  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 17:57:00.431455  262782 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 17:57:00.431554  262782 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 17:57:00.431566  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /etc/ssl/certs/2504112.pem
	I1031 17:57:00.431653  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 17:57:00.440172  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:00.463049  262782 start.go:303] post-start completed in 129.785818ms
	I1031 17:57:00.463114  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetConfigRaw
	I1031 17:57:00.463739  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.466423  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.466890  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.466926  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.467267  262782 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410/config.json ...
	I1031 17:57:00.467464  262782 start.go:128] duration metric: createHost completed in 26.05650891s
	I1031 17:57:00.467498  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.469793  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470183  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.470219  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.470429  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.470653  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470826  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.470961  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.471252  262782 main.go:141] libmachine: Using SSH client type: native
	I1031 17:57:00.471597  262782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.39.59 22 <nil> <nil>}
	I1031 17:57:00.471610  262782 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1031 17:57:00.599316  262782 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698775020.573164169
	
	I1031 17:57:00.599344  262782 fix.go:206] guest clock: 1698775020.573164169
	I1031 17:57:00.599353  262782 fix.go:219] Guest: 2023-10-31 17:57:00.573164169 +0000 UTC Remote: 2023-10-31 17:57:00.467478074 +0000 UTC m=+101.189341224 (delta=105.686095ms)
	I1031 17:57:00.599370  262782 fix.go:190] guest clock delta is within tolerance: 105.686095ms
	I1031 17:57:00.599375  262782 start.go:83] releasing machines lock for "multinode-441410-m02", held for 26.188557851s
	I1031 17:57:00.599399  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.599772  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:00.602685  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.603107  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.603146  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.605925  262782 out.go:177] * Found network options:
	I1031 17:57:00.607687  262782 out.go:177]   - NO_PROXY=192.168.39.206
	W1031 17:57:00.609275  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.609328  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610043  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610273  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .DriverName
	I1031 17:57:00.610377  262782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 17:57:00.610408  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	W1031 17:57:00.610514  262782 proxy.go:119] fail to check proxy env: Error ip not in block
	I1031 17:57:00.610606  262782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1031 17:57:00.610632  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHHostname
	I1031 17:57:00.613237  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613322  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613590  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613626  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613769  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.613808  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:00.613848  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:00.613965  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHPort
	I1031 17:57:00.614137  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614171  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHKeyPath
	I1031 17:57:00.614304  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614355  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetSSHUsername
	I1031 17:57:00.614442  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.614524  262782 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/multinode-441410-m02/id_rsa Username:docker}
	I1031 17:57:00.704211  262782 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1031 17:57:00.740397  262782 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W1031 17:57:00.740471  262782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 17:57:00.740540  262782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 17:57:00.755704  262782 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1031 17:57:00.755800  262782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 17:57:00.755846  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.756065  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:00.775137  262782 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1031 17:57:00.775239  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 17:57:00.784549  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 17:57:00.793788  262782 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 17:57:00.793864  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 17:57:00.802914  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.811913  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 17:57:00.821043  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 17:57:00.829847  262782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 17:57:00.839148  262782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 17:57:00.849075  262782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 17:57:00.857656  262782 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1031 17:57:00.857741  262782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 17:57:00.866493  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:00.969841  262782 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 17:57:00.987133  262782 start.go:472] detecting cgroup driver to use...
	I1031 17:57:00.987211  262782 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 17:57:01.001129  262782 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1031 17:57:01.001952  262782 command_runner.go:130] > [Unit]
	I1031 17:57:01.001970  262782 command_runner.go:130] > Description=Docker Application Container Engine
	I1031 17:57:01.001976  262782 command_runner.go:130] > Documentation=https://docs.docker.com
	I1031 17:57:01.001981  262782 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1031 17:57:01.001986  262782 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1031 17:57:01.001992  262782 command_runner.go:130] > StartLimitBurst=3
	I1031 17:57:01.001996  262782 command_runner.go:130] > StartLimitIntervalSec=60
	I1031 17:57:01.002000  262782 command_runner.go:130] > [Service]
	I1031 17:57:01.002003  262782 command_runner.go:130] > Type=notify
	I1031 17:57:01.002008  262782 command_runner.go:130] > Restart=on-failure
	I1031 17:57:01.002013  262782 command_runner.go:130] > Environment=NO_PROXY=192.168.39.206
	I1031 17:57:01.002020  262782 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1031 17:57:01.002043  262782 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1031 17:57:01.002056  262782 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1031 17:57:01.002067  262782 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1031 17:57:01.002078  262782 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1031 17:57:01.002095  262782 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1031 17:57:01.002105  262782 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1031 17:57:01.002126  262782 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1031 17:57:01.002133  262782 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1031 17:57:01.002137  262782 command_runner.go:130] > ExecStart=
	I1031 17:57:01.002152  262782 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1031 17:57:01.002161  262782 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1031 17:57:01.002168  262782 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1031 17:57:01.002177  262782 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1031 17:57:01.002181  262782 command_runner.go:130] > LimitNOFILE=infinity
	I1031 17:57:01.002185  262782 command_runner.go:130] > LimitNPROC=infinity
	I1031 17:57:01.002189  262782 command_runner.go:130] > LimitCORE=infinity
	I1031 17:57:01.002195  262782 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1031 17:57:01.002201  262782 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1031 17:57:01.002205  262782 command_runner.go:130] > TasksMax=infinity
	I1031 17:57:01.002209  262782 command_runner.go:130] > TimeoutStartSec=0
	I1031 17:57:01.002215  262782 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1031 17:57:01.002220  262782 command_runner.go:130] > Delegate=yes
	I1031 17:57:01.002226  262782 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1031 17:57:01.002234  262782 command_runner.go:130] > KillMode=process
	I1031 17:57:01.002238  262782 command_runner.go:130] > [Install]
	I1031 17:57:01.002243  262782 command_runner.go:130] > WantedBy=multi-user.target
	I1031 17:57:01.002747  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.015488  262782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 17:57:01.039688  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 17:57:01.052508  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.065022  262782 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 17:57:01.092972  262782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 17:57:01.105692  262782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 17:57:01.122532  262782 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1031 17:57:01.122950  262782 ssh_runner.go:195] Run: which cri-dockerd
	I1031 17:57:01.126532  262782 command_runner.go:130] > /usr/bin/cri-dockerd
	I1031 17:57:01.126733  262782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 17:57:01.134826  262782 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 17:57:01.150492  262782 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 17:57:01.252781  262782 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 17:57:01.367390  262782 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 17:57:01.367451  262782 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 17:57:01.384227  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:01.485864  262782 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 17:57:02.890324  262782 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.404406462s)
	I1031 17:57:02.890472  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:02.994134  262782 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 17:57:03.106885  262782 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 17:57:03.221595  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.334278  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 17:57:03.352220  262782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 17:57:03.467540  262782 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 17:57:03.546367  262782 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 17:57:03.546431  262782 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 17:57:03.552162  262782 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1031 17:57:03.552190  262782 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1031 17:57:03.552200  262782 command_runner.go:130] > Device: 16h/22d	Inode: 975         Links: 1
	I1031 17:57:03.552210  262782 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1031 17:57:03.552219  262782 command_runner.go:130] > Access: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552227  262782 command_runner.go:130] > Modify: 2023-10-31 17:57:03.457897059 +0000
	I1031 17:57:03.552242  262782 command_runner.go:130] > Change: 2023-10-31 17:57:03.461902242 +0000
	I1031 17:57:03.552252  262782 command_runner.go:130] >  Birth: -
	I1031 17:57:03.552400  262782 start.go:540] Will wait 60s for crictl version
	I1031 17:57:03.552467  262782 ssh_runner.go:195] Run: which crictl
	I1031 17:57:03.556897  262782 command_runner.go:130] > /usr/bin/crictl
	I1031 17:57:03.556981  262782 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 17:57:03.612340  262782 command_runner.go:130] > Version:  0.1.0
	I1031 17:57:03.612371  262782 command_runner.go:130] > RuntimeName:  docker
	I1031 17:57:03.612376  262782 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1031 17:57:03.612384  262782 command_runner.go:130] > RuntimeApiVersion:  v1
	I1031 17:57:03.612402  262782 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 17:57:03.612450  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.638084  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.638269  262782 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 17:57:03.662703  262782 command_runner.go:130] > 24.0.6
	I1031 17:57:03.666956  262782 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 17:57:03.668586  262782 out.go:177]   - env NO_PROXY=192.168.39.206
	I1031 17:57:03.670298  262782 main.go:141] libmachine: (multinode-441410-m02) Calling .GetIP
	I1031 17:57:03.672869  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673251  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:0b:10", ip: ""} in network mk-multinode-441410: {Iface:virbr1 ExpiryTime:2023-10-31 18:56:49 +0000 UTC Type:0 Mac:52:54:00:52:0b:10 Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:multinode-441410-m02 Clientid:01:52:54:00:52:0b:10}
	I1031 17:57:03.673285  262782 main.go:141] libmachine: (multinode-441410-m02) DBG | domain multinode-441410-m02 has defined IP address 192.168.39.59 and MAC address 52:54:00:52:0b:10 in network mk-multinode-441410
	I1031 17:57:03.673497  262782 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 17:57:03.677874  262782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 17:57:03.689685  262782 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/multinode-441410 for IP: 192.168.39.59
	I1031 17:57:03.689730  262782 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 17:57:03.689916  262782 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 17:57:03.689978  262782 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 17:57:03.689996  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1031 17:57:03.690015  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1031 17:57:03.690065  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1031 17:57:03.690089  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1031 17:57:03.690286  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 17:57:03.690347  262782 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 17:57:03.690365  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 17:57:03.690401  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 17:57:03.690437  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 17:57:03.690475  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 17:57:03.690529  262782 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 17:57:03.690571  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.690595  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem -> /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.690614  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.691067  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 17:57:03.713623  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 17:57:03.737218  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 17:57:03.760975  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 17:57:03.789337  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 17:57:03.815440  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 17:57:03.837143  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 17:57:03.860057  262782 ssh_runner.go:195] Run: openssl version
	I1031 17:57:03.865361  262782 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1031 17:57:03.865549  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 17:57:03.876142  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880664  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880739  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.880807  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 17:57:03.886249  262782 command_runner.go:130] > b5213941
	I1031 17:57:03.886311  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 17:57:03.896461  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 17:57:03.907068  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911643  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911749  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.911820  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 17:57:03.917361  262782 command_runner.go:130] > 51391683
	I1031 17:57:03.917447  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 17:57:03.933000  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 17:57:03.947497  262782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.952830  262782 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953209  262782 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.953269  262782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 17:57:03.959961  262782 command_runner.go:130] > 3ec20f2e
	I1031 17:57:03.960127  262782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
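The log lines above run `openssl x509 -hash -noout` on each CA certificate and then symlink `<hash>.0` into `/etc/ssl/certs`, which is the standard OpenSSL "rehash" layout that TLS libraries use to look up trust anchors by subject-name hash. A minimal local sketch of the same steps, using a throwaway self-signed CA in a temp directory rather than minikube's real paths:

```shell
set -eu
workdir=$(mktemp -d)

# Generate a throwaway CA certificate (demoCA is an illustrative name).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.pem" 2>/dev/null

# Same command the log runs: print the subject-name hash used for the link name.
hash=$(openssl x509 -hash -noout -in "$workdir/ca.pem")

# Mirror the log's 'test -L <hash>.0 || ln -fs <cert> <hash>.0' idiom.
test -L "$workdir/$hash.0" || ln -fs "$workdir/ca.pem" "$workdir/$hash.0"
echo "linked $hash.0"
```

The `test -L … || ln -fs` guard makes the step idempotent across repeated minikube starts: the symlink is only (re)created when it is missing.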
	I1031 17:57:03.970549  262782 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 17:57:03.974564  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974611  262782 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1031 17:57:03.974708  262782 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 17:57:04.000358  262782 command_runner.go:130] > cgroupfs
	I1031 17:57:04.000440  262782 cni.go:84] Creating CNI manager for ""
	I1031 17:57:04.000450  262782 cni.go:136] 2 nodes found, recommending kindnet
	I1031 17:57:04.000463  262782 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 17:57:04.000490  262782 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.59 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-441410 NodeName:multinode-441410-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 17:57:04.000691  262782 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-441410-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
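The generated config above is a single file holding four YAML documents, each consumed by a different component (kubeadm itself, the kubelet, and kube-proxy). A trimmed sketch of that structure, with field values omitted and only the document kinds kept:

```shell
# Write a skeleton of the multi-document kubeadm config and count its
# documents. The /tmp path is illustrative, not where minikube writes it.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

grep -c '^kind:' /tmp/kubeadm-demo.yaml   # 4 documents
```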
	
	I1031 17:57:04.000757  262782 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-441410-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-441410 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 17:57:04.000808  262782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.010640  262782 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	I1031 17:57:04.010691  262782 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.3': No such file or directory
	
	Initiating transfer...
	I1031 17:57:04.010744  262782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.3
	I1031 17:57:04.021036  262782 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256
	I1031 17:57:04.021037  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm
	I1031 17:57:04.021079  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl -> /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.021047  262782 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet
	I1031 17:57:04.021166  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl
	I1031 17:57:04.025888  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026030  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubectl': No such file or directory
	I1031 17:57:04.026084  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubectl --> /var/lib/minikube/binaries/v1.28.3/kubectl (49872896 bytes)
	I1031 17:57:09.997688  262782 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm -> /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:09.997775  262782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm
	I1031 17:57:10.003671  262782 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003717  262782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.3/kubeadm': No such file or directory
	I1031 17:57:10.003742  262782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubeadm --> /var/lib/minikube/binaries/v1.28.3/kubeadm (49045504 bytes)
	I1031 17:57:10.242093  262782 out.go:177] 
	W1031 17:57:10.244016  262782 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: update node: downloading binaries: downloading kubelet: download failed: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubelet.sha256 Dst:/home/jenkins/minikube-integration/17530-243226/.minikube/cache/linux/amd64/v1.28.3/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20 0x4614e20] Decompressors:map[bz2:0xc000015f00 gz:0xc000015f08 tar:0xc000015ea0 tar.bz2:0xc000015eb0 tar.gz:0xc000015ec0 tar.xz:0xc000015ed0 tar.zst:0xc000015ef0 tbz2:0xc000015eb0 tgz:0xc000015ec0 txz:0xc000015ed0 tzst:0xc000015ef0 xz:0xc000015f10 zip:0xc000015f20 zst:0xc000015f18] Getters:map[file:0xc0027de5f0 http:0xc0013cf4f0 https:0xc0013cf540] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.154.0.4:37952->151.101.193.55:443: read: connection reset by peer
	W1031 17:57:10.244041  262782 out.go:239] * 
	W1031 17:57:10.244911  262782 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 17:57:10.246517  262782 out.go:177] 
	
	* 
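The GUEST_START failure above is a TCP connection reset while fetching the kubelet binary; the `checksum=file:...sha256` fragment in the URL shows that each downloaded binary is paired with a published `.sha256` file and verified against it. A minimal local sketch of that verification step, using a stand-in file instead of a real ~100 MB kubelet download:

```shell
set -eu

# Stand-in for the downloaded binary (contents are arbitrary).
printf 'stand-in binary contents' > /tmp/demo-kubelet

# Stand-in for the published .sha256 file: just the hex digest.
sha256sum /tmp/demo-kubelet | awk '{print $1}' > /tmp/demo-kubelet.sha256

# Verify the file against the digest, the same pairing minikube's
# "checksum=file:...sha256" option expresses.
echo "$(cat /tmp/demo-kubelet.sha256)  /tmp/demo-kubelet" | sha256sum -c -
```

`sha256sum -c` expects `"<digest>  <path>"` lines, so the digest-only `.sha256` file is rejoined with the local path before checking; on a transient reset like the one logged, a retry of the download would be re-verified the same way.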
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:10:17 UTC. --
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.808688642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.807347360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810510452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810528647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:30 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:30.810538337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8ca440412b4f3430637fd159290abe187a7fc203fcc5642b2485672f91a518db/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:56:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04a78c282aa967688b556b9a1d080a34b542d36ec8d9940d8debaa555b7bcbd8/resolv.conf as [nameserver 192.168.122.1]"
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441875555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.441940642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443120429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.443137849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464627801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464781195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464813262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:56:31 multinode-441410 dockerd[1134]: time="2023-10-31T17:56:31.464840709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115698734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115788892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115818663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:13 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:13.115834877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:13 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/363b11b004cf7910e6872cbc82cf9fb787d2ad524ca406031b7514f116cb91fb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 31 17:57:15 multinode-441410 cri-dockerd[1014]: time="2023-10-31T17:57:15Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506722776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506845599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506905919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 17:57:15 multinode-441410 dockerd[1134]: time="2023-10-31T17:57:15.506918450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e514b5df78db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   363b11b004cf7       busybox-5bc68d56bd-682nc
	74195b9ce8448       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   04a78c282aa96       storage-provisioner
	cb6f76b4a1cc0       ead0a4a53df89                                                                                         13 minutes ago      Running             coredns                   0                   8ca440412b4f3       coredns-5dd5756b68-lwggp
	047c3eb3f0536       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              13 minutes ago      Running             kindnet-cni               0                   6400c9ed90ae3       kindnet-6rrkf
	b31ffb53919bb       bfc896cf80fba                                                                                         13 minutes ago      Running             kube-proxy                0                   be482a709e293       kube-proxy-tbl8r
	d67e21eeb5b77       6d1b4fd1b182d                                                                                         14 minutes ago      Running             kube-scheduler            0                   ca4a1ea8cc92e       kube-scheduler-multinode-441410
	d7e5126106718       73deb9a3f7025                                                                                         14 minutes ago      Running             etcd                      0                   ccf9be12e6982       etcd-multinode-441410
	12eb3fb3a41b0       10baa1ca17068                                                                                         14 minutes ago      Running             kube-controller-manager   0                   c8c98af031813       kube-controller-manager-multinode-441410
	1cf5febbb4d5f       5374347291230                                                                                         14 minutes ago      Running             kube-apiserver            0                   8af0572aaf117       kube-apiserver-multinode-441410
	
	* 
	* ==> coredns [cb6f76b4a1cc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50699 - 124 "HINFO IN 6967170714003633987.9075705449036268494. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012164893s
	[INFO] 10.244.0.3:41511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000461384s
	[INFO] 10.244.0.3:47664 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.010903844s
	[INFO] 10.244.0.3:45546 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.015010309s
	[INFO] 10.244.0.3:36607 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011237302s
	[INFO] 10.244.0.3:48310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142792s
	[INFO] 10.244.0.3:52370 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002904808s
	[INFO] 10.244.0.3:47454 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150911s
	[INFO] 10.244.0.3:59669 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081418s
	[INFO] 10.244.0.3:46795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005958126s
	[INFO] 10.244.0.3:60027 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132958s
	[INFO] 10.244.0.3:52394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072131s
	[INFO] 10.244.0.3:33935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070128s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075594s
	[INFO] 10.244.0.3:45061 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000057395s
	[INFO] 10.244.0.3:42068 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048863s
	[INFO] 10.244.0.3:37779 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031797s
	[INFO] 10.244.0.3:60205 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093356s
	[INFO] 10.244.0.3:39779 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119857s
	[INFO] 10.244.0.3:45984 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000097797s
	[INFO] 10.244.0.3:59468 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091924s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-441410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=multinode-441410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T17_56_07_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 17:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:10:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:07:50 +0000   Tue, 31 Oct 2023 17:56:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    multinode-441410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a75f981009b84441b4426f6da95c3105
	  System UUID:                a75f9810-09b8-4441-b442-6f6da95c3105
	  Boot ID:                    20c74b20-ee02-4aec-b46a-2d5585acaca4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-682nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-lwggp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-441410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-6rrkf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-441410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-441410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-tbl8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-441410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-441410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-441410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-441410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node multinode-441410 event: Registered Node multinode-441410 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-441410 status is now: NodeReady
	
	
	Name:               multinode-441410-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-441410-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 18:10:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-441410-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 Oct 2023 18:10:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:10:14 +0000   Tue, 31 Oct 2023 18:10:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:10:14 +0000   Tue, 31 Oct 2023 18:10:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:10:14 +0000   Tue, 31 Oct 2023 18:10:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:10:14 +0000   Tue, 31 Oct 2023 18:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-441410-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b3d12434efc4b28b1f56666426107d6
	  System UUID:                2b3d1243-4efc-4b28-b1f5-6666426107d6
	  Boot ID:                    6b057e04-c2c4-43de-9b45-cd047edea1b1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9hq7l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      63s
	  kube-system                 kube-proxy-c9rvt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 57s                kube-proxy  
	  Normal  Starting                 9s                 kube-proxy  
	  Normal  NodeHasSufficientMemory  63s (x5 over 65s)  kubelet     Node multinode-441410-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x5 over 65s)  kubelet     Node multinode-441410-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x5 over 65s)  kubelet     Node multinode-441410-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                48s                kubelet     Node multinode-441410-m03 status is now: NodeReady
	  Normal  Starting                 12s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)  kubelet     Node multinode-441410-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)  kubelet     Node multinode-441410-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)  kubelet     Node multinode-441410-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                 kubelet     Node multinode-441410-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.062130] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.341199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.937118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139606] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.028034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.511569] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.107035] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +1.121853] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +0.293645] systemd-fstab-generator[777]: Ignoring "noauto" for root device
	[  +0.101803] systemd-fstab-generator[788]: Ignoring "noauto" for root device
	[  +0.117538] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +1.501378] systemd-fstab-generator[959]: Ignoring "noauto" for root device
	[  +0.120138] systemd-fstab-generator[970]: Ignoring "noauto" for root device
	[  +0.103289] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +0.118380] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.131035] systemd-fstab-generator[1006]: Ignoring "noauto" for root device
	[  +4.317829] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +4.058636] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.605200] systemd-fstab-generator[1504]: Ignoring "noauto" for root device
	[  +0.446965] kauditd_printk_skb: 29 callbacks suppressed
	[Oct31 17:56] systemd-fstab-generator[2441]: Ignoring "noauto" for root device
	[ +21.444628] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [d7e512610671] <==
	* {"level":"info","ts":"2023-10-31T17:56:00.8535Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2023-10-31T17:56:00.859687Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-31T17:56:00.859811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-31T17:56:01.665675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 1"}
	{"level":"info","ts":"2023-10-31T17:56:01.665781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.665802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2023-10-31T17:56:01.667453Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.66893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:multinode-441410 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-31T17:56:01.668955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.669814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.670156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-31T17:56:01.671056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.671176Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.673505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2023-10-31T17:56:01.67448Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-31T17:56:01.705344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:01.705462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-31T17:56:26.903634Z","caller":"traceutil/trace.go:171","msg":"trace[1217831514] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"116.90774ms","start":"2023-10-31T17:56:26.786707Z","end":"2023-10-31T17:56:26.903615Z","steps":["trace[1217831514] 'process raft request'  (duration: 116.406724ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-31T18:06:01.735722Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":693}
	{"level":"info","ts":"2023-10-31T18:06:01.739705Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":693,"took":"3.294185ms","hash":411838697}
	{"level":"info","ts":"2023-10-31T18:06:01.739888Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":411838697,"revision":693,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  18:10:18 up 14 min,  0 users,  load average: 0.31, 0.34, 0.22
	Linux multinode-441410 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [047c3eb3f053] <==
	* I1031 18:09:18.637526       1 main.go:227] handling current node
	I1031 18:09:18.637616       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:18.637763       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:18.638179       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.127 Flags: [] Table: 0} 
	I1031 18:09:28.646550       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:28.646574       1 main.go:227] handling current node
	I1031 18:09:28.646588       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:28.646593       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:38.658930       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:38.658961       1 main.go:227] handling current node
	I1031 18:09:38.658979       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:38.658984       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:48.664514       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:48.664698       1 main.go:227] handling current node
	I1031 18:09:48.664731       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:48.664748       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:09:58.669613       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:09:58.669742       1 main.go:227] handling current node
	I1031 18:09:58.669765       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:09:58.669779       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.1.0/24] 
	I1031 18:10:08.675885       1 main.go:223] Handling node with IPs: map[192.168.39.206:{}]
	I1031 18:10:08.676158       1 main.go:227] handling current node
	I1031 18:10:08.676353       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I1031 18:10:08.676483       1 main.go:250] Node multinode-441410-m03 has CIDR [10.244.2.0/24] 
	I1031 18:10:08.676830       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.127 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [1cf5febbb4d5] <==
	* I1031 17:56:03.297486       1 shared_informer.go:318] Caches are synced for configmaps
	I1031 17:56:03.297922       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1031 17:56:03.298095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 17:56:03.296411       1 controller.go:624] quota admission added evaluator for: namespaces
	I1031 17:56:03.298617       1 aggregator.go:166] initial CRD sync complete...
	I1031 17:56:03.298758       1 autoregister_controller.go:141] Starting autoregister controller
	I1031 17:56:03.298831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 17:56:03.298934       1 cache.go:39] Caches are synced for autoregister controller
	E1031 17:56:03.331582       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1031 17:56:03.538063       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 17:56:04.199034       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1031 17:56:04.204935       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1031 17:56:04.204985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 17:56:04.843769       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 17:56:04.907235       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 17:56:05.039995       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1031 17:56:05.052137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.206]
	I1031 17:56:05.053161       1 controller.go:624] quota admission added evaluator for: endpoints
	I1031 17:56:05.058951       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1031 17:56:05.257178       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1031 17:56:06.531069       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1031 17:56:06.548236       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1031 17:56:06.565431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1031 17:56:18.632989       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1031 17:56:18.982503       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [12eb3fb3a41b] <==
	* I1031 17:56:30.353922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.815µs"
	I1031 17:56:30.385706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.335µs"
	I1031 17:56:32.673652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="201.04µs"
	I1031 17:56:32.726325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.70151ms"
	I1031 17:56:32.728902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.63µs"
	I1031 17:56:33.080989       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1031 17:57:12.661640       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1031 17:57:12.679843       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-682nc"
	I1031 17:57:12.692916       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-67pbp"
	I1031 17:57:12.724024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.449933ms"
	I1031 17:57:12.739655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.513683ms"
	I1031 17:57:12.756995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.066176ms"
	I1031 17:57:12.757435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.002µs"
	I1031 17:57:16.065577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.601668ms"
	I1031 17:57:16.065747       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.752µs"
	I1031 18:09:15.207912       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-441410-m03\" does not exist"
	I1031 18:09:15.231014       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-441410-m03" podCIDRs=["10.244.1.0/24"]
	I1031 18:09:15.237884       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9hq7l"
	I1031 18:09:15.237930       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-c9rvt"
	I1031 18:09:18.211568       1 event.go:307] "Event occurred" object="multinode-441410-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-441410-m03 event: Registered Node multinode-441410-m03 in Controller"
	I1031 18:09:18.212158       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-441410-m03"
	I1031 18:09:30.048381       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-441410-m03"
	I1031 18:10:06.485027       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-441410-m03\" does not exist"
	I1031 18:10:06.493638       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-441410-m03" podCIDRs=["10.244.2.0/24"]
	I1031 18:10:14.798532       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-441410-m03"
	
	* 
	* ==> kube-proxy [b31ffb53919b] <==
	* I1031 17:56:20.251801       1 server_others.go:69] "Using iptables proxy"
	I1031 17:56:20.273468       1 node.go:141] Successfully retrieved node IP: 192.168.39.206
	I1031 17:56:20.432578       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1031 17:56:20.432606       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1031 17:56:20.435879       1 server_others.go:152] "Using iptables Proxier"
	I1031 17:56:20.436781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1031 17:56:20.437069       1 server.go:846] "Version info" version="v1.28.3"
	I1031 17:56:20.437107       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 17:56:20.439642       1 config.go:188] "Starting service config controller"
	I1031 17:56:20.440338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1031 17:56:20.440429       1 config.go:97] "Starting endpoint slice config controller"
	I1031 17:56:20.440436       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1031 17:56:20.443901       1 config.go:315] "Starting node config controller"
	I1031 17:56:20.443942       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1031 17:56:20.541521       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1031 17:56:20.541587       1 shared_informer.go:318] Caches are synced for service config
	I1031 17:56:20.544432       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d67e21eeb5b7] <==
	* W1031 17:56:03.311598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:03.311633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:03.311722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:03.311751       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.159485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 17:56:04.159532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1031 17:56:04.217824       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1031 17:56:04.218047       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 17:56:04.232082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.232346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.260140       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 17:56:04.260192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1031 17:56:04.276153       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 17:56:04.276245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1031 17:56:04.362193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 17:56:04.362352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1031 17:56:04.401747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 17:56:04.402094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1031 17:56:04.474111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 17:56:04.474225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1031 17:56:04.532359       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1031 17:56:04.532393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1031 17:56:04.554134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 17:56:04.554242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1031 17:56:06.181676       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 17:55:31 UTC, ends at Tue 2023-10-31 18:10:18 UTC. --
	Oct 31 18:04:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:05:06 multinode-441410 kubelet[2461]: E1031 18:05:06.810106    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:05:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:06:06 multinode-441410 kubelet[2461]: E1031 18:06:06.809899    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:06:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:07:06 multinode-441410 kubelet[2461]: E1031 18:07:06.809480    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:07:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:08:06 multinode-441410 kubelet[2461]: E1031 18:08:06.809111    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:08:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:09:06 multinode-441410 kubelet[2461]: E1031 18:09:06.811861    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:09:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 31 18:10:06 multinode-441410 kubelet[2461]: E1031 18:10:06.809833    2461 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 31 18:10:06 multinode-441410 kubelet[2461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 31 18:10:06 multinode-441410 kubelet[2461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 31 18:10:06 multinode-441410 kubelet[2461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-441410 -n multinode-441410
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-441410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-5bc68d56bd-67pbp
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp
helpers_test.go:282: (dbg) kubectl --context multinode-441410 describe pod busybox-5bc68d56bd-67pbp:

-- stdout --
	Name:             busybox-5bc68d56bd-67pbp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=5bc68d56bd
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-5bc68d56bd
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thnn2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-thnn2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m42s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (35.77s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-976044 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-976044 "sudo crictl images -o json": exit status 1 (271.159189ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-976044 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-976044 -n old-k8s-version-976044
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-976044 logs -n 25
E1031 18:47:26.793353  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-976044 logs -n 25: (1.033385094s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-235459  | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:39 UTC | 31 Oct 23 18:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:39 UTC | 31 Oct 23 18:39 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-235459       | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:39 UTC | 31 Oct 23 18:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:39 UTC | 31 Oct 23 18:45 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |                |                     |                     |
	| ssh     | -p no-preload-799191 sudo                              | no-preload-799191            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | crictl images -o json                                  |                              |         |                |                     |                     |
	| pause   | -p no-preload-799191                                   | no-preload-799191            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p no-preload-799191                                   | no-preload-799191            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p no-preload-799191                                   | no-preload-799191            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	| delete  | -p no-preload-799191                                   | no-preload-799191            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	| start   | -p newest-cni-556434 --memory=2200 --alsologtostderr   | newest-cni-556434            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |                |                     |                     |
	| ssh     | -p embed-certs-189930 sudo                             | embed-certs-189930           | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | crictl images -o json                                  |                              |         |                |                     |                     |
	| pause   | -p embed-certs-189930                                  | embed-certs-189930           | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p embed-certs-189930                                  | embed-certs-189930           | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p embed-certs-189930                                  | embed-certs-189930           | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	| delete  | -p embed-certs-189930                                  | embed-certs-189930           | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:45 UTC | 31 Oct 23 18:45 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |                |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-235459 | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | default-k8s-diff-port-235459                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-556434             | newest-cni-556434            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-556434                                   | newest-cni-556434            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-556434                  | newest-cni-556434            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC | 31 Oct 23 18:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-556434 --memory=2200 --alsologtostderr   | newest-cni-556434            | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:46 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |                |                     |                     |
	| ssh     | -p old-k8s-version-976044 sudo                         | old-k8s-version-976044       | jenkins | v1.32.0-beta.0 | 31 Oct 23 18:47 UTC |                     |
	|         | crictl images -o json                                  |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 18:46:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:46:42.311192  300343 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:46:42.311326  300343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:46:42.311334  300343 out.go:309] Setting ErrFile to fd 2...
	I1031 18:46:42.311339  300343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:46:42.311510  300343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 18:46:42.312102  300343 out.go:303] Setting JSON to false
	I1031 18:46:42.313076  300343 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8913,"bootTime":1698769090,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:46:42.313140  300343 start.go:138] virtualization: kvm guest
	I1031 18:46:42.315674  300343 out.go:177] * [newest-cni-556434] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:46:42.317302  300343 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 18:46:42.317358  300343 notify.go:220] Checking for updates...
	I1031 18:46:42.318841  300343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:46:42.320304  300343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 18:46:42.321803  300343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 18:46:42.323234  300343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:46:42.324533  300343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 18:46:42.326525  300343 config.go:182] Loaded profile config "newest-cni-556434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:46:42.326944  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:46:42.327007  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:46:42.344972  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I1031 18:46:42.345466  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:46:42.346003  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:46:42.346028  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:46:42.346473  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:46:42.346738  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:46:42.347052  300343 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 18:46:42.347357  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:46:42.347398  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:46:42.361988  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I1031 18:46:42.362457  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:46:42.363030  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:46:42.363061  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:46:42.363399  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:46:42.363601  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:46:42.401923  300343 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:46:42.403481  300343 start.go:298] selected driver: kvm2
	I1031 18:46:42.403495  300343 start.go:902] validating driver "kvm2" against &{Name:newest-cni-556434 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-556
434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 18:46:42.403684  300343 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:46:42.404402  300343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:46:42.404491  300343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:46:42.420518  300343 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 18:46:42.421012  300343 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1031 18:46:42.421093  300343 cni.go:84] Creating CNI manager for ""
	I1031 18:46:42.421113  300343 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1031 18:46:42.421126  300343 start_flags.go:323] config:
	{Name:newest-cni-556434 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-556434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 18:46:42.421395  300343 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:46:42.425125  300343 out.go:177] * Starting control plane node newest-cni-556434 in cluster newest-cni-556434
	I1031 18:46:42.426491  300343 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 18:46:42.426532  300343 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 18:46:42.426540  300343 cache.go:56] Caching tarball of preloaded images
	I1031 18:46:42.426677  300343 preload.go:174] Found /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 18:46:42.426702  300343 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1031 18:46:42.426835  300343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/config.json ...
	I1031 18:46:42.427075  300343 start.go:365] acquiring machines lock for newest-cni-556434: {Name:mk5db143762c10037b1ef8f9624c38e498b05186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:46:42.427154  300343 start.go:369] acquired machines lock for "newest-cni-556434" in 46.813µs
	I1031 18:46:42.427182  300343 start.go:96] Skipping create...Using existing machine configuration
	I1031 18:46:42.427198  300343 fix.go:54] fixHost starting: 
	I1031 18:46:42.427492  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:46:42.427525  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:46:42.442211  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I1031 18:46:42.442672  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:46:42.443222  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:46:42.443254  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:46:42.443584  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:46:42.443826  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:46:42.444000  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:46:42.445879  300343 fix.go:102] recreateIfNeeded on newest-cni-556434: state=Stopped err=<nil>
	I1031 18:46:42.445928  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	W1031 18:46:42.446131  300343 fix.go:128] unexpected machine state, will restart: <nil>
	I1031 18:46:42.448233  300343 out.go:177] * Restarting existing kvm2 VM for "newest-cni-556434" ...
	I1031 18:46:42.622713  296974 system_pods.go:86] 4 kube-system pods found
	I1031 18:46:42.622747  296974 system_pods.go:89] "coredns-5644d7b6d9-km2s9" [c0cae151-f060-4074-8a25-8263d20ff0e3] Running
	I1031 18:46:42.622752  296974 system_pods.go:89] "kube-proxy-4hrrh" [79b995b9-f4a9-4ad8-9e2b-24351ce716be] Running
	I1031 18:46:42.622758  296974 system_pods.go:89] "metrics-server-74d5856cc6-b92hb" [1434154b-4282-4cf1-a3a5-b925ccd76d30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 18:46:42.622764  296974 system_pods.go:89] "storage-provisioner" [02b16282-0aad-460d-8717-8198563a22eb] Running
	I1031 18:46:42.622782  296974 retry.go:31] will retry after 9.400099364s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 18:46:42.449586  300343 main.go:141] libmachine: (newest-cni-556434) Calling .Start
	I1031 18:46:42.449818  300343 main.go:141] libmachine: (newest-cni-556434) Ensuring networks are active...
	I1031 18:46:42.450830  300343 main.go:141] libmachine: (newest-cni-556434) Ensuring network default is active
	I1031 18:46:42.451203  300343 main.go:141] libmachine: (newest-cni-556434) Ensuring network mk-newest-cni-556434 is active
	I1031 18:46:42.451766  300343 main.go:141] libmachine: (newest-cni-556434) Getting domain xml...
	I1031 18:46:42.452555  300343 main.go:141] libmachine: (newest-cni-556434) Creating domain...
	I1031 18:46:43.755763  300343 main.go:141] libmachine: (newest-cni-556434) Waiting to get IP...
	I1031 18:46:43.756659  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:43.757107  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:43.757192  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:43.757077  300379 retry.go:31] will retry after 194.990965ms: waiting for machine to come up
	I1031 18:46:43.953830  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:43.954480  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:43.954515  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:43.954420  300379 retry.go:31] will retry after 244.914708ms: waiting for machine to come up
	I1031 18:46:44.201021  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:44.201585  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:44.201616  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:44.201531  300379 retry.go:31] will retry after 342.713782ms: waiting for machine to come up
	I1031 18:46:44.546303  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:44.546956  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:44.546986  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:44.546903  300379 retry.go:31] will retry after 410.231758ms: waiting for machine to come up
	I1031 18:46:44.958445  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:44.958985  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:44.959014  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:44.958924  300379 retry.go:31] will retry after 647.070066ms: waiting for machine to come up
	I1031 18:46:45.607177  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:45.607593  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:45.607627  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:45.607549  300379 retry.go:31] will retry after 722.661712ms: waiting for machine to come up
	I1031 18:46:46.331589  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:46.332075  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:46.332110  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:46.332030  300379 retry.go:31] will retry after 960.681214ms: waiting for machine to come up
	I1031 18:46:47.294093  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:47.294649  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:47.294677  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:47.294577  300379 retry.go:31] will retry after 961.389847ms: waiting for machine to come up
	I1031 18:46:48.257693  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:48.258182  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:48.258214  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:48.258120  300379 retry.go:31] will retry after 1.596866773s: waiting for machine to come up
	I1031 18:46:49.857040  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:49.857530  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:49.857564  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:49.857472  300379 retry.go:31] will retry after 2.067389316s: waiting for machine to come up
	I1031 18:46:51.927803  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:51.928346  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:51.928375  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:51.928275  300379 retry.go:31] will retry after 2.333304149s: waiting for machine to come up
	I1031 18:46:52.029041  296974 system_pods.go:86] 6 kube-system pods found
	I1031 18:46:52.029113  296974 system_pods.go:89] "coredns-5644d7b6d9-km2s9" [c0cae151-f060-4074-8a25-8263d20ff0e3] Running
	I1031 18:46:52.029121  296974 system_pods.go:89] "etcd-old-k8s-version-976044" [36f8379a-8b91-40e6-af26-000164de9550] Pending
	I1031 18:46:52.029125  296974 system_pods.go:89] "kube-apiserver-old-k8s-version-976044" [947483c2-5505-4fbd-9e0b-9db16d748fba] Pending
	I1031 18:46:52.029131  296974 system_pods.go:89] "kube-proxy-4hrrh" [79b995b9-f4a9-4ad8-9e2b-24351ce716be] Running
	I1031 18:46:52.029140  296974 system_pods.go:89] "metrics-server-74d5856cc6-b92hb" [1434154b-4282-4cf1-a3a5-b925ccd76d30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 18:46:52.029146  296974 system_pods.go:89] "storage-provisioner" [02b16282-0aad-460d-8717-8198563a22eb] Running
	I1031 18:46:52.029164  296974 retry.go:31] will retry after 9.922196179s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1031 18:46:54.263942  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:54.264406  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:54.264434  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:54.264360  300379 retry.go:31] will retry after 2.36815628s: waiting for machine to come up
	I1031 18:46:56.635913  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:56.636306  300343 main.go:141] libmachine: (newest-cni-556434) DBG | unable to find current IP address of domain newest-cni-556434 in network mk-newest-cni-556434
	I1031 18:46:56.636330  300343 main.go:141] libmachine: (newest-cni-556434) DBG | I1031 18:46:56.636221  300379 retry.go:31] will retry after 2.891457408s: waiting for machine to come up
	I1031 18:46:59.530338  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.530859  300343 main.go:141] libmachine: (newest-cni-556434) Found IP for machine: 192.168.50.86
	I1031 18:46:59.530895  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has current primary IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.530906  300343 main.go:141] libmachine: (newest-cni-556434) Reserving static IP address...
	I1031 18:46:59.531301  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "newest-cni-556434", mac: "52:54:00:0b:f5:a3", ip: "192.168.50.86"} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.531327  300343 main.go:141] libmachine: (newest-cni-556434) Reserved static IP address: 192.168.50.86
	I1031 18:46:59.531337  300343 main.go:141] libmachine: (newest-cni-556434) DBG | skip adding static IP to network mk-newest-cni-556434 - found existing host DHCP lease matching {name: "newest-cni-556434", mac: "52:54:00:0b:f5:a3", ip: "192.168.50.86"}
	I1031 18:46:59.531358  300343 main.go:141] libmachine: (newest-cni-556434) DBG | Getting to WaitForSSH function...
	I1031 18:46:59.531379  300343 main.go:141] libmachine: (newest-cni-556434) Waiting for SSH to be available...
	I1031 18:46:59.533178  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.533527  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.533565  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.533655  300343 main.go:141] libmachine: (newest-cni-556434) DBG | Using SSH client type: external
	I1031 18:46:59.533687  300343 main.go:141] libmachine: (newest-cni-556434) DBG | Using SSH private key: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa (-rw-------)
	I1031 18:46:59.533736  300343 main.go:141] libmachine: (newest-cni-556434) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 18:46:59.533760  300343 main.go:141] libmachine: (newest-cni-556434) DBG | About to run SSH command:
	I1031 18:46:59.533777  300343 main.go:141] libmachine: (newest-cni-556434) DBG | exit 0
	I1031 18:46:59.629628  300343 main.go:141] libmachine: (newest-cni-556434) DBG | SSH cmd err, output: <nil>: 
	I1031 18:46:59.629928  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetConfigRaw
	I1031 18:46:59.630561  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetIP
	I1031 18:46:59.632728  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.633120  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.633151  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.633453  300343 profile.go:148] Saving config to /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/config.json ...
	I1031 18:46:59.633675  300343 machine.go:88] provisioning docker machine ...
	I1031 18:46:59.633699  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:46:59.633988  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetMachineName
	I1031 18:46:59.634210  300343 buildroot.go:166] provisioning hostname "newest-cni-556434"
	I1031 18:46:59.634237  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetMachineName
	I1031 18:46:59.634438  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:46:59.636832  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.637254  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.637286  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.637387  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:46:59.637573  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:46:59.637694  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:46:59.637851  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:46:59.638007  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:46:59.638371  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:46:59.638386  300343 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-556434 && echo "newest-cni-556434" | sudo tee /etc/hostname
	I1031 18:46:59.778226  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-556434
	
	I1031 18:46:59.778254  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:46:59.781250  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.781587  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.781627  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.781804  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:46:59.782062  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:46:59.782221  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:46:59.782376  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:46:59.782519  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:46:59.782880  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:46:59.782910  300343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-556434' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-556434/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-556434' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 18:46:59.917012  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:46:59.917050  300343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17530-243226/.minikube CaCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17530-243226/.minikube}
	I1031 18:46:59.917076  300343 buildroot.go:174] setting up certificates
	I1031 18:46:59.917089  300343 provision.go:83] configureAuth start
	I1031 18:46:59.917108  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetMachineName
	I1031 18:46:59.917429  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetIP
	I1031 18:46:59.920548  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.920945  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.920980  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.921094  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:46:59.923668  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.923995  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:46:59.924026  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:46:59.924160  300343 provision.go:138] copyHostCerts
	I1031 18:46:59.924220  300343 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem, removing ...
	I1031 18:46:59.924234  300343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem
	I1031 18:46:59.924298  300343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/key.pem (1679 bytes)
	I1031 18:46:59.924380  300343 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem, removing ...
	I1031 18:46:59.924388  300343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem
	I1031 18:46:59.924417  300343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/ca.pem (1082 bytes)
	I1031 18:46:59.924468  300343 exec_runner.go:144] found /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem, removing ...
	I1031 18:46:59.924474  300343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem
	I1031 18:46:59.924493  300343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17530-243226/.minikube/cert.pem (1123 bytes)
	I1031 18:46:59.924536  300343 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem org=jenkins.newest-cni-556434 san=[192.168.50.86 192.168.50.86 localhost 127.0.0.1 minikube newest-cni-556434]
	I1031 18:47:00.313869  300343 provision.go:172] copyRemoteCerts
	I1031 18:47:00.313942  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 18:47:00.313969  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:00.316730  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.317015  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:00.317051  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.317199  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:00.317423  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.317608  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:00.317770  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:00.410641  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 18:47:00.432087  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1031 18:47:00.453318  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1031 18:47:00.475274  300343 provision.go:86] duration metric: configureAuth took 558.162971ms
	I1031 18:47:00.475312  300343 buildroot.go:189] setting minikube options for container-runtime
	I1031 18:47:00.475559  300343 config.go:182] Loaded profile config "newest-cni-556434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:47:00.475595  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:00.475897  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:00.478491  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.478840  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:00.478885  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.478986  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:00.479184  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.479325  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.479560  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:00.479765  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:47:00.480108  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:47:00.480123  300343 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 18:47:00.611372  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 18:47:00.611413  300343 buildroot.go:70] root file system type: tmpfs
	I1031 18:47:00.611604  300343 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 18:47:00.611639  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:00.614824  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.615195  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:00.615222  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.615440  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:00.615652  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.615826  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.615953  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:00.616099  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:47:00.616438  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:47:00.616496  300343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 18:47:00.761429  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 18:47:00.761470  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:00.764725  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.765211  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:00.765247  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:00.765473  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:00.765694  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.765898  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:00.766090  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:00.766270  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:47:00.766734  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:47:00.766763  300343 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 18:47:01.620447  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1031 18:47:01.620508  300343 machine.go:91] provisioned docker machine in 1.986802258s
	I1031 18:47:01.620524  300343 start.go:300] post-start starting for "newest-cni-556434" (driver="kvm2")
	I1031 18:47:01.620539  300343 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 18:47:01.620566  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:01.620921  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 18:47:01.620951  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:01.623880  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.624284  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:01.624316  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.624455  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:01.624690  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:01.624905  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:01.625075  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:01.725880  300343 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 18:47:01.731380  300343 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 18:47:01.731414  300343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/addons for local assets ...
	I1031 18:47:01.731489  300343 filesync.go:126] Scanning /home/jenkins/minikube-integration/17530-243226/.minikube/files for local assets ...
	I1031 18:47:01.731604  300343 filesync.go:149] local asset: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem -> 2504112.pem in /etc/ssl/certs
	I1031 18:47:01.731705  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 18:47:01.740707  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 18:47:01.762151  300343 start.go:303] post-start completed in 141.60565ms
	I1031 18:47:01.762188  300343 fix.go:56] fixHost completed within 19.33498955s
	I1031 18:47:01.762219  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:01.764919  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.765285  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:01.765319  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.765522  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:01.765761  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:01.765932  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:01.766149  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:01.766341  300343 main.go:141] libmachine: Using SSH client type: native
	I1031 18:47:01.766667  300343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f7ac0] 0x7fa7a0 <nil>  [] 0s} 192.168.50.86 22 <nil> <nil>}
	I1031 18:47:01.766678  300343 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 18:47:01.894581  300343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698778021.846715951
	
	I1031 18:47:01.894626  300343 fix.go:206] guest clock: 1698778021.846715951
	I1031 18:47:01.894639  300343 fix.go:219] Guest: 2023-10-31 18:47:01.846715951 +0000 UTC Remote: 2023-10-31 18:47:01.762193477 +0000 UTC m=+19.502011979 (delta=84.522474ms)
	I1031 18:47:01.894668  300343 fix.go:190] guest clock delta is within tolerance: 84.522474ms
	I1031 18:47:01.894675  300343 start.go:83] releasing machines lock for "newest-cni-556434", held for 19.467506138s
	I1031 18:47:01.894703  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:01.895029  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetIP
	I1031 18:47:01.897774  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.898121  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:01.898151  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.898318  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:01.898889  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:01.899088  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:01.899174  300343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 18:47:01.899217  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:01.899305  300343 ssh_runner.go:195] Run: cat /version.json
	I1031 18:47:01.899330  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:01.901527  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.901779  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.901856  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:01.901884  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.902101  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:01.902161  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:01.902184  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:01.902262  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:01.902369  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:01.902457  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:01.902542  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:01.902608  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:01.902657  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:01.902784  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:02.036018  300343 ssh_runner.go:195] Run: systemctl --version
	I1031 18:47:02.042136  300343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1031 18:47:02.047749  300343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1031 18:47:02.047832  300343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1031 18:47:02.062898  300343 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1031 18:47:02.062932  300343 start.go:472] detecting cgroup driver to use...
	I1031 18:47:02.063169  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 18:47:02.084151  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1031 18:47:02.093606  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1031 18:47:02.103089  300343 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1031 18:47:02.103175  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1031 18:47:02.112514  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 18:47:02.122046  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1031 18:47:02.131461  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1031 18:47:02.141166  300343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1031 18:47:02.150930  300343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1031 18:47:02.160824  300343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 18:47:02.169451  300343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 18:47:02.178337  300343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:47:02.291492  300343 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 18:47:02.309376  300343 start.go:472] detecting cgroup driver to use...
	I1031 18:47:02.309469  300343 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 18:47:02.328293  300343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 18:47:02.341136  300343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1031 18:47:02.359081  300343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 18:47:02.372589  300343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:47:02.384757  300343 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 18:47:02.416564  300343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:47:02.429010  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 18:47:02.445998  300343 ssh_runner.go:195] Run: which cri-dockerd
	I1031 18:47:02.450308  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1031 18:47:02.459091  300343 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1031 18:47:02.474791  300343 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 18:47:02.585777  300343 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 18:47:02.688606  300343 docker.go:561] configuring docker to use "cgroupfs" as cgroup driver...
	I1031 18:47:02.688797  300343 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1031 18:47:02.704203  300343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:47:02.805363  300343 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 18:47:04.234820  300343 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.429415875s)
	I1031 18:47:04.234908  300343 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 18:47:04.341425  300343 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1031 18:47:04.449147  300343 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1031 18:47:04.555494  300343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:47:04.670533  300343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1031 18:47:04.694919  300343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:47:04.818737  300343 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1031 18:47:04.899137  300343 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1031 18:47:04.899214  300343 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1031 18:47:04.905290  300343 start.go:540] Will wait 60s for crictl version
	I1031 18:47:04.905371  300343 ssh_runner.go:195] Run: which crictl
	I1031 18:47:04.909881  300343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1031 18:47:04.981400  300343 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1031 18:47:04.981589  300343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 18:47:05.021031  300343 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1031 18:47:05.053760  300343 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1031 18:47:05.053822  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetIP
	I1031 18:47:05.056895  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:05.057417  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:05.057458  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:05.057662  300343 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1031 18:47:05.062257  300343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 18:47:05.076996  300343 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1031 18:47:01.965051  296974 system_pods.go:86] 7 kube-system pods found
	I1031 18:47:01.965080  296974 system_pods.go:89] "coredns-5644d7b6d9-km2s9" [c0cae151-f060-4074-8a25-8263d20ff0e3] Running
	I1031 18:47:01.965085  296974 system_pods.go:89] "etcd-old-k8s-version-976044" [36f8379a-8b91-40e6-af26-000164de9550] Running
	I1031 18:47:01.965089  296974 system_pods.go:89] "kube-apiserver-old-k8s-version-976044" [947483c2-5505-4fbd-9e0b-9db16d748fba] Running
	I1031 18:47:01.965093  296974 system_pods.go:89] "kube-controller-manager-old-k8s-version-976044" [56189d6b-5bac-4631-a371-dd0fc82c2e22] Running
	I1031 18:47:01.965097  296974 system_pods.go:89] "kube-proxy-4hrrh" [79b995b9-f4a9-4ad8-9e2b-24351ce716be] Running
	I1031 18:47:01.965103  296974 system_pods.go:89] "metrics-server-74d5856cc6-b92hb" [1434154b-4282-4cf1-a3a5-b925ccd76d30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 18:47:01.965109  296974 system_pods.go:89] "storage-provisioner" [02b16282-0aad-460d-8717-8198563a22eb] Running
	I1031 18:47:01.965124  296974 retry.go:31] will retry after 12.736101131s: missing components: kube-scheduler
	I1031 18:47:05.078678  300343 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 18:47:05.078782  300343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 18:47:05.102451  300343 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 18:47:05.102484  300343 docker.go:629] Images already preloaded, skipping extraction
	I1031 18:47:05.102563  300343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 18:47:05.126331  300343 docker.go:699] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1031 18:47:05.126370  300343 cache_images.go:84] Images are preloaded, skipping loading
	I1031 18:47:05.126439  300343 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1031 18:47:05.160081  300343 cni.go:84] Creating CNI manager for ""
	I1031 18:47:05.160117  300343 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1031 18:47:05.160136  300343 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1031 18:47:05.160154  300343 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.86 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-556434 NodeName:newest-cni-556434 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.86"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.86 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1031 18:47:05.160336  300343 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.86
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-556434"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.86
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.86"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 18:47:05.160428  300343 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-556434 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.86
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-556434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 18:47:05.160526  300343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1031 18:47:05.170381  300343 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 18:47:05.170457  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 18:47:05.179267  300343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (416 bytes)
	I1031 18:47:05.196129  300343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 18:47:05.213717  300343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1031 18:47:05.232078  300343 ssh_runner.go:195] Run: grep 192.168.50.86	control-plane.minikube.internal$ /etc/hosts
	I1031 18:47:05.236051  300343 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.86	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1031 18:47:05.248828  300343 certs.go:56] Setting up /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434 for IP: 192.168.50.86
	I1031 18:47:05.248874  300343 certs.go:190] acquiring lock for shared ca certs: {Name:mka41adcdff2868dcba42b44e1661a4c92f9a14e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:47:05.249081  300343 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key
	I1031 18:47:05.249121  300343 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key
	I1031 18:47:05.249199  300343 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/client.key
	I1031 18:47:05.249282  300343 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/apiserver.key.b5d61b7e
	I1031 18:47:05.249343  300343 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/proxy-client.key
	I1031 18:47:05.249480  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem (1338 bytes)
	W1031 18:47:05.249516  300343 certs.go:433] ignoring /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411_empty.pem, impossibly tiny 0 bytes
	I1031 18:47:05.249533  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca-key.pem (1679 bytes)
	I1031 18:47:05.249568  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/ca.pem (1082 bytes)
	I1031 18:47:05.249597  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/cert.pem (1123 bytes)
	I1031 18:47:05.249630  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/certs/home/jenkins/minikube-integration/17530-243226/.minikube/certs/key.pem (1679 bytes)
	I1031 18:47:05.249690  300343 certs.go:437] found cert: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem (1708 bytes)
	I1031 18:47:05.250406  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 18:47:05.274860  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 18:47:05.298472  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 18:47:05.324188  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/newest-cni-556434/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1031 18:47:05.349665  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 18:47:05.376353  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 18:47:05.400021  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 18:47:05.424249  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1031 18:47:05.448912  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 18:47:05.472722  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/certs/250411.pem --> /usr/share/ca-certificates/250411.pem (1338 bytes)
	I1031 18:47:05.496181  300343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/ssl/certs/2504112.pem --> /usr/share/ca-certificates/2504112.pem (1708 bytes)
	I1031 18:47:05.524061  300343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1031 18:47:05.543594  300343 ssh_runner.go:195] Run: openssl version
	I1031 18:47:05.549590  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 18:47:05.559522  300343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:47:05.564260  300343 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 31 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:47:05.564334  300343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:47:05.569890  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 18:47:05.579469  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250411.pem && ln -fs /usr/share/ca-certificates/250411.pem /etc/ssl/certs/250411.pem"
	I1031 18:47:05.588949  300343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250411.pem
	I1031 18:47:05.593972  300343 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 31 17:43 /usr/share/ca-certificates/250411.pem
	I1031 18:47:05.594060  300343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250411.pem
	I1031 18:47:05.599667  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250411.pem /etc/ssl/certs/51391683.0"
	I1031 18:47:05.610099  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2504112.pem && ln -fs /usr/share/ca-certificates/2504112.pem /etc/ssl/certs/2504112.pem"
	I1031 18:47:05.620099  300343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2504112.pem
	I1031 18:47:05.625259  300343 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 31 17:43 /usr/share/ca-certificates/2504112.pem
	I1031 18:47:05.625337  300343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2504112.pem
	I1031 18:47:05.630976  300343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2504112.pem /etc/ssl/certs/3ec20f2e.0"
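The three `test -L … || ln -fs …` commands above install each CA certificate under `/etc/ssl/certs/<subject-hash>.0`, where the hash is the output of `openssl x509 -hash -noout` — this is the lookup scheme OpenSSL uses to find trusted CAs by subject. A self-contained reproduction (the `/tmp` paths and `demoCA` name are illustrative, not from this run):

```shell
# Create a throwaway CA cert, compute its OpenSSL subject hash,
# and install a <hash>.0 symlink the way minikube does above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.pem -days 30 -subj "/CN=demoCA" 2>/dev/null

hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)   # e.g. b5213941
mkdir -p /tmp/certs
# -f overwrites a stale link, -s makes it symbolic, mirroring the log
ln -fs /tmp/ca.pem "/tmp/certs/${hash}.0"
ls -l "/tmp/certs/${hash}.0"
```

With a real trust store the target directory would be `/etc/ssl/certs` and `c_rehash` (or `openssl rehash`) can build all the links at once.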
	I1031 18:47:05.640381  300343 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1031 18:47:05.644826  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1031 18:47:05.650548  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1031 18:47:05.656342  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1031 18:47:05.661982  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1031 18:47:05.667372  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1031 18:47:05.672820  300343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
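Each `-checkend 86400` probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that window, which is how minikube decides no cert regeneration is needed. A minimal sketch against a throwaway cert (paths are illustrative):

```shell
# Generate a self-signed cert valid for 365 days, then check that it
# will not expire within the next 24 hours (86400 seconds).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 365 -subj "/CN=demo" 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400; then
  echo "cert ok for 24h"
else
  echo "cert expires within 24h - regenerate"
fi
```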
	I1031 18:47:05.678775  300343 kubeadm.go:404] StartCluster: {Name:newest-cni-556434 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-556434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 18:47:05.678965  300343 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 18:47:05.697341  300343 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 18:47:05.707969  300343 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1031 18:47:05.708021  300343 kubeadm.go:636] restartCluster start
	I1031 18:47:05.708072  300343 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 18:47:05.717549  300343 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:05.718180  300343 kubeconfig.go:135] verify returned: extract IP: "newest-cni-556434" does not appear in /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 18:47:05.718455  300343 kubeconfig.go:146] "newest-cni-556434" context is missing from /home/jenkins/minikube-integration/17530-243226/kubeconfig - will repair!
	I1031 18:47:05.718959  300343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:47:05.720411  300343 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 18:47:05.729213  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:05.729283  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:05.740120  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:05.740150  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:05.740207  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:05.752098  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:06.252814  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:06.252905  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:06.264853  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:06.752433  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:06.752535  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:06.764934  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:07.252514  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:07.252615  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:07.263643  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:07.752212  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:07.752303  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:07.764167  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:08.252686  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:08.252796  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:08.264238  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:08.752853  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:08.752958  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:08.764218  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:09.253014  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:09.253113  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:09.264971  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:09.752489  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:09.752569  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:09.764078  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:10.252616  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:10.252708  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:10.263524  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:10.753180  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:10.753269  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:10.765670  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:11.252225  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:11.252324  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:11.265266  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:11.752938  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:11.753062  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:11.764829  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:12.252324  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:12.252429  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:12.264349  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:14.707316  296974 system_pods.go:86] 8 kube-system pods found
	I1031 18:47:14.707351  296974 system_pods.go:89] "coredns-5644d7b6d9-km2s9" [c0cae151-f060-4074-8a25-8263d20ff0e3] Running
	I1031 18:47:14.707357  296974 system_pods.go:89] "etcd-old-k8s-version-976044" [36f8379a-8b91-40e6-af26-000164de9550] Running
	I1031 18:47:14.707361  296974 system_pods.go:89] "kube-apiserver-old-k8s-version-976044" [947483c2-5505-4fbd-9e0b-9db16d748fba] Running
	I1031 18:47:14.707365  296974 system_pods.go:89] "kube-controller-manager-old-k8s-version-976044" [56189d6b-5bac-4631-a371-dd0fc82c2e22] Running
	I1031 18:47:14.707369  296974 system_pods.go:89] "kube-proxy-4hrrh" [79b995b9-f4a9-4ad8-9e2b-24351ce716be] Running
	I1031 18:47:14.707373  296974 system_pods.go:89] "kube-scheduler-old-k8s-version-976044" [060e3dad-2aed-446b-8236-f74763eb435b] Running
	I1031 18:47:14.707380  296974 system_pods.go:89] "metrics-server-74d5856cc6-b92hb" [1434154b-4282-4cf1-a3a5-b925ccd76d30] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 18:47:14.707385  296974 system_pods.go:89] "storage-provisioner" [02b16282-0aad-460d-8717-8198563a22eb] Running
	I1031 18:47:14.707393  296974 system_pods.go:126] duration metric: took 1m3.463658274s to wait for k8s-apps to be running ...
	I1031 18:47:14.707400  296974 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 18:47:14.707447  296974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:47:14.723530  296974 system_svc.go:56] duration metric: took 16.11552ms WaitForService to wait for kubelet.
	I1031 18:47:14.723569  296974 kubeadm.go:581] duration metric: took 1m13.477342619s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 18:47:14.723599  296974 node_conditions.go:102] verifying NodePressure condition ...
	I1031 18:47:14.726983  296974 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 18:47:14.727022  296974 node_conditions.go:123] node cpu capacity is 2
	I1031 18:47:14.727038  296974 node_conditions.go:105] duration metric: took 3.43305ms to run NodePressure ...
	I1031 18:47:14.727054  296974 start.go:228] waiting for startup goroutines ...
	I1031 18:47:14.727063  296974 start.go:233] waiting for cluster config update ...
	I1031 18:47:14.727076  296974 start.go:242] writing updated cluster config ...
	I1031 18:47:14.727390  296974 ssh_runner.go:195] Run: rm -f paused
	I1031 18:47:14.779270  296974 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1031 18:47:14.780959  296974 out.go:177] 
	W1031 18:47:14.782413  296974 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1031 18:47:14.783850  296974 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1031 18:47:14.785760  296974 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-976044" cluster and "default" namespace by default
	I1031 18:47:12.752441  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:12.752545  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:12.764469  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:13.253133  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:13.253242  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:13.265683  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:13.752238  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:13.752360  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:13.763814  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:14.252377  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:14.252457  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:14.264111  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:14.752616  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:14.752703  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:14.764541  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:47:15.252228  300343 api_server.go:166] Checking apiserver status ...
	I1031 18:47:15.252318  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1031 18:47:15.264380  300343 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
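The repeated `Checking apiserver status ...` entries above show minikube polling `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms; `-f` matches against the full command line, `-x` requires the pattern to match it exactly, and `-n` selects the newest match. Each exit status 1 means no kube-apiserver process exists yet, and the loop ends at the line below with `context deadline exceeded`. The polling pattern can be sketched as follows (the `sleep 30` target process, function name, and 1-second interval are illustrative, not minikube's):

```shell
# Poll pgrep for a process until found or a deadline (in seconds) passes.
wait_for_process() {
  pattern=$1; deadline=$2
  elapsed=0
  while [ "$elapsed" -lt "$deadline" ]; do
    # -x: whole-command-line match, -n: newest, -f: match full cmdline
    if pgrep -xnf "$pattern" >/dev/null 2>&1; then
      return 0                      # process appeared
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1                          # deadline exceeded
}

sleep 30 &                          # stand-in for kube-apiserver
wait_for_process "sleep 30" 3 && echo "found"
kill %1                             # clean up the background process
```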
	I1031 18:47:15.729912  300343 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1031 18:47:15.729964  300343 kubeadm.go:1128] stopping kube-system containers ...
	I1031 18:47:15.730077  300343 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1031 18:47:15.754212  300343 docker.go:470] Stopping containers: [9059bf73475b 0ee4f9c42747 b17c6752c479 d41816be58c0 27d405d2490b cc852ce2c593 da09023894c7 c08f252351dd 6da741954269 315fb335d439 059648d6925a cae5dfd2ea21 0c6083cfdac0 7cea64946eb3]
	I1031 18:47:15.754295  300343 ssh_runner.go:195] Run: docker stop 9059bf73475b 0ee4f9c42747 b17c6752c479 d41816be58c0 27d405d2490b cc852ce2c593 da09023894c7 c08f252351dd 6da741954269 315fb335d439 059648d6925a cae5dfd2ea21 0c6083cfdac0 7cea64946eb3
	I1031 18:47:15.773973  300343 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 18:47:15.788148  300343 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 18:47:15.796619  300343 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
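The exit status 2 above comes from a single `ls -la` invocation over all four expected kubeconfig files: if any one is missing, `ls` fails, and minikube treats the node as needing a fresh configure rather than a stale-config cleanup. A self-contained sketch of the same check (the temp directory and file names stand in for `/etc/kubernetes`):

```shell
# Probe several required files in one ls call; any missing file
# makes ls exit non-zero, signalling "configure from scratch".
dir=$(mktemp -d)
touch "$dir/admin.conf" "$dir/kubelet.conf"    # two of four present

if ls -la "$dir/admin.conf" "$dir/kubelet.conf" \
      "$dir/controller-manager.conf" "$dir/scheduler.conf" >/dev/null 2>&1; then
  echo "all configs present - clean up stale config"
else
  echo "config check failed - fresh configure"
fi
```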
	I1031 18:47:15.796686  300343 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 18:47:15.804956  300343 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 18:47:15.804985  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:15.922493  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:16.852751  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:17.042603  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:17.128691  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:17.216535  300343 api_server.go:52] waiting for apiserver process to appear ...
	I1031 18:47:17.216633  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:17.231429  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:17.743308  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:18.242971  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:18.743249  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:19.243895  300343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:47:19.284707  300343 api_server.go:72] duration metric: took 2.068170085s to wait for apiserver process to appear ...
	I1031 18:47:19.284746  300343 api_server.go:88] waiting for apiserver healthz status ...
	I1031 18:47:19.284769  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:19.285309  300343 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I1031 18:47:19.285394  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:19.285934  300343 api_server.go:269] stopped: https://192.168.50.86:8443/healthz: Get "https://192.168.50.86:8443/healthz": dial tcp 192.168.50.86:8443: connect: connection refused
	I1031 18:47:19.786700  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:22.504227  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 18:47:22.504261  300343 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 18:47:22.504279  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:22.548911  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 18:47:22.548955  300343 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 18:47:22.786145  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:22.791354  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 18:47:22.791383  300343 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 18:47:23.287039  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:23.292552  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 18:47:23.292584  300343 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 18:47:23.786208  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:23.794121  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1031 18:47:23.794153  300343 api_server.go:103] status: https://192.168.50.86:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1031 18:47:24.286797  300343 api_server.go:253] Checking apiserver healthz at https://192.168.50.86:8443/healthz ...
	I1031 18:47:24.292260  300343 api_server.go:279] https://192.168.50.86:8443/healthz returned 200:
	ok
	I1031 18:47:24.301190  300343 api_server.go:141] control plane version: v1.28.3
	I1031 18:47:24.301224  300343 api_server.go:131] duration metric: took 5.016467631s to wait for apiserver health ...
	I1031 18:47:24.301236  300343 cni.go:84] Creating CNI manager for ""
	I1031 18:47:24.301267  300343 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1031 18:47:24.303589  300343 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 18:47:24.305307  300343 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 18:47:24.316776  300343 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 18:47:24.347992  300343 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 18:47:24.358209  300343 system_pods.go:59] 8 kube-system pods found
	I1031 18:47:24.358246  300343 system_pods.go:61] "coredns-5dd5756b68-hf22b" [fc66035c-6a58-404c-8e2c-32a3c57ff4f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1031 18:47:24.358255  300343 system_pods.go:61] "etcd-newest-cni-556434" [366e27e7-5828-41d4-9c53-c9bb9d44c96f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1031 18:47:24.358263  300343 system_pods.go:61] "kube-apiserver-newest-cni-556434" [5c5ee4fd-82e3-40ce-92aa-780c4cda08b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1031 18:47:24.358271  300343 system_pods.go:61] "kube-controller-manager-newest-cni-556434" [aec9dd9f-b2a0-47a6-947f-e228639ff16c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1031 18:47:24.358280  300343 system_pods.go:61] "kube-proxy-njpln" [3630fe18-17a1-4d33-a4e2-d19a86e21d13] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1031 18:47:24.358293  300343 system_pods.go:61] "kube-scheduler-newest-cni-556434" [ae40412b-a787-451b-93e6-eafc8cc9a786] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1031 18:47:24.358304  300343 system_pods.go:61] "metrics-server-57f55c9bc5-hvvmr" [a5d87636-5abc-4c2b-836c-98a9fb5525ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1031 18:47:24.358318  300343 system_pods.go:61] "storage-provisioner" [86f0f6ce-9896-44b5-83c1-0ccd5fac649b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 18:47:24.358332  300343 system_pods.go:74] duration metric: took 10.312669ms to wait for pod list to return data ...
	I1031 18:47:24.358346  300343 node_conditions.go:102] verifying NodePressure condition ...
	I1031 18:47:24.362701  300343 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 18:47:24.362745  300343 node_conditions.go:123] node cpu capacity is 2
	I1031 18:47:24.362758  300343 node_conditions.go:105] duration metric: took 4.403205ms to run NodePressure ...
	I1031 18:47:24.362782  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:47:24.950612  300343 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 18:47:24.998841  300343 ops.go:34] apiserver oom_adj: -16
	I1031 18:47:24.998868  300343 kubeadm.go:640] restartCluster took 19.290838801s
	I1031 18:47:24.998878  300343 kubeadm.go:406] StartCluster complete in 19.320113761s
	I1031 18:47:24.998904  300343 settings.go:142] acquiring lock: {Name:mk06464896167c6fcd425dd9d6e992b0d80fe7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:47:24.999005  300343 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 18:47:24.999808  300343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17530-243226/kubeconfig: {Name:mke8ab51ed50f1c9f615624c2ece0d97f99c2f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:47:25.000041  300343 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 18:47:25.000148  300343 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1031 18:47:25.000229  300343 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-556434"
	I1031 18:47:25.000253  300343 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-556434"
	I1031 18:47:25.000250  300343 addons.go:69] Setting default-storageclass=true in profile "newest-cni-556434"
	I1031 18:47:25.000277  300343 addons.go:69] Setting dashboard=true in profile "newest-cni-556434"
	I1031 18:47:25.000293  300343 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-556434"
	I1031 18:47:25.000299  300343 addons.go:231] Setting addon dashboard=true in "newest-cni-556434"
	W1031 18:47:25.000310  300343 addons.go:240] addon dashboard should already be in state true
	I1031 18:47:25.000363  300343 host.go:66] Checking if "newest-cni-556434" exists ...
	I1031 18:47:25.000262  300343 config.go:182] Loaded profile config "newest-cni-556434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:47:25.000725  300343 cache.go:107] acquiring lock: {Name:mk4f74b8c745dbd1e702bab2fb7c39bb3e1d6ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:47:25.000751  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.000770  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.000779  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.000806  300343 cache.go:115] /home/jenkins/minikube-integration/17530-243226/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1031 18:47:25.000812  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.000821  300343 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17530-243226/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 104.57µs
	I1031 18:47:25.000836  300343 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1031 18:47:25.000844  300343 cache.go:87] Successfully saved all images to host disk.
	W1031 18:47:25.000265  300343 addons.go:240] addon storage-provisioner should already be in state true
	I1031 18:47:25.000270  300343 addons.go:69] Setting metrics-server=true in profile "newest-cni-556434"
	I1031 18:47:25.000883  300343 addons.go:231] Setting addon metrics-server=true in "newest-cni-556434"
	W1031 18:47:25.000895  300343 addons.go:240] addon metrics-server should already be in state true
	I1031 18:47:25.001119  300343 host.go:66] Checking if "newest-cni-556434" exists ...
	I1031 18:47:25.001147  300343 host.go:66] Checking if "newest-cni-556434" exists ...
	I1031 18:47:25.001525  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.001560  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.001607  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.001643  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.001853  300343 config.go:182] Loaded profile config "newest-cni-556434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:47:25.002193  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.002227  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.006828  300343 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-556434" context rescaled to 1 replicas
	I1031 18:47:25.006892  300343 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.86 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 18:47:25.009525  300343 out.go:177] * Verifying Kubernetes components...
	I1031 18:47:25.018137  300343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:47:25.021211  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44415
	I1031 18:47:25.021367  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I1031 18:47:25.021683  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I1031 18:47:25.021867  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.021986  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.022151  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.022353  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.022375  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.022546  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.022566  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.022608  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.022626  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.022713  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.023020  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I1031 18:47:25.023050  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.023125  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.023281  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.023323  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.023615  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.023679  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.024123  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.024288  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:47:25.024660  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.024679  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.025145  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.025392  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:47:25.026939  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.026989  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.028439  300343 addons.go:231] Setting addon default-storageclass=true in "newest-cni-556434"
	W1031 18:47:25.028461  300343 addons.go:240] addon default-storageclass should already be in state true
	I1031 18:47:25.028492  300343 host.go:66] Checking if "newest-cni-556434" exists ...
	I1031 18:47:25.028890  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.028916  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.029530  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I1031 18:47:25.030064  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.030540  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.030576  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.030956  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.031485  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.031550  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.044314  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I1031 18:47:25.044603  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I1031 18:47:25.044868  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.045425  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.045457  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.045877  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.046125  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:47:25.046245  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.046889  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.046917  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.047337  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.047685  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:47:25.048182  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:25.050265  300343 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1031 18:47:25.051879  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I1031 18:47:25.049928  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:25.050733  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I1031 18:47:25.052491  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.053419  300343 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1031 18:47:25.054820  300343 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1031 18:47:25.054844  300343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1031 18:47:25.054908  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:25.056213  300343 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1031 18:47:25.053859  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.053900  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.056166  300343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I1031 18:47:25.058425  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.058575  300343 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1031 18:47:25.058588  300343 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1031 18:47:25.058612  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:25.058618  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.059321  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.059342  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.059420  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.059474  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:25.059490  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.059579  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:25.059623  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:25.059799  300343 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 18:47:25.059837  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHHostname
	I1031 18:47:25.059843  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:25.059842  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.060050  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:25.060242  300343 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:47:25.060298  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:25.060343  300343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:47:25.060379  300343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:47:25.063009  300343 main.go:141] libmachine: Using API Version  1
	I1031 18:47:25.063031  300343 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:47:25.063104  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.063381  300343 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:47:25.063569  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetState
	I1031 18:47:25.063588  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:25.063610  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.063633  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.063942  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:25.064124  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:25.064175  300343 main.go:141] libmachine: (newest-cni-556434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:f5:a3", ip: ""} in network mk-newest-cni-556434: {Iface:virbr2 ExpiryTime:2023-10-31 19:46:54 +0000 UTC Type:0 Mac:52:54:00:0b:f5:a3 Iaid: IPaddr:192.168.50.86 Prefix:24 Hostname:newest-cni-556434 Clientid:01:52:54:00:0b:f5:a3}
	I1031 18:47:25.064198  300343 main.go:141] libmachine: (newest-cni-556434) DBG | domain newest-cni-556434 has defined IP address 192.168.50.86 and MAC address 52:54:00:0b:f5:a3 in network mk-newest-cni-556434
	I1031 18:47:25.064262  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:25.064371  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:25.064412  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHPort
	I1031 18:47:25.064512  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHKeyPath
	I1031 18:47:25.064636  300343 main.go:141] libmachine: (newest-cni-556434) Calling .GetSSHUsername
	I1031 18:47:25.064732  300343 sshutil.go:53] new ssh client: &{IP:192.168.50.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/newest-cni-556434/id_rsa Username:docker}
	I1031 18:47:25.066232  300343 main.go:141] libmachine: (newest-cni-556434) Calling .DriverName
	I1031 18:47:25.068858  300343 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-31 18:40:09 UTC, ends at Tue 2023-10-31 18:47:26 UTC. --
	Oct 31 18:46:21 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:21.532794034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:46:21 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:21.533036713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:46:22 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:46:22.024092043Z" level=info msg="ignoring event" container=dafd086cfd83955259c4d79479435d468c0a1e291bb156be26d9e38b877801c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 31 18:46:22 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:22.025308449Z" level=info msg="shim disconnected" id=dafd086cfd83955259c4d79479435d468c0a1e291bb156be26d9e38b877801c1 namespace=moby
	Oct 31 18:46:22 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:22.025355610Z" level=warning msg="cleaning up after shim disconnected" id=dafd086cfd83955259c4d79479435d468c0a1e291bb156be26d9e38b877801c1 namespace=moby
	Oct 31 18:46:22 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:22.025363351Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.098837710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.099226820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.099833773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.099883297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:46:44.494684459Z" level=info msg="ignoring event" container=4ce318f892dcd9fe77949bb01b2c1a7af9742563021c43f0fd96de0ab06a946f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.495799165Z" level=info msg="shim disconnected" id=4ce318f892dcd9fe77949bb01b2c1a7af9742563021c43f0fd96de0ab06a946f namespace=moby
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.496628120Z" level=warning msg="cleaning up after shim disconnected" id=4ce318f892dcd9fe77949bb01b2c1a7af9742563021c43f0fd96de0ab06a946f namespace=moby
	Oct 31 18:46:44 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:46:44.496644890Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 31 18:46:47 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:46:47.020220837Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 31 18:46:47 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:46:47.020554500Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 31 18:46:47 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:46:47.023522210Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.092846650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.094070639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.094093900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.094104601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1082]: time="2023-10-31T18:47:11.483869314Z" level=info msg="ignoring event" container=971b817da9281dea444c7bff898ff5d9c4fdcb87f4d078d0e1b3d16f30fb9d7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.484735370Z" level=info msg="shim disconnected" id=971b817da9281dea444c7bff898ff5d9c4fdcb87f4d078d0e1b3d16f30fb9d7b namespace=moby
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.485302940Z" level=warning msg="cleaning up after shim disconnected" id=971b817da9281dea444c7bff898ff5d9c4fdcb87f4d078d0e1b3d16f30fb9d7b namespace=moby
	Oct 31 18:47:11 old-k8s-version-976044 dockerd[1088]: time="2023-10-31T18:47:11.485330895Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	971b817da928   a90209bb39e3             "nginx -g 'daemon of…"   15 seconds ago       Exited (1) 14 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard_a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c_3
	db0c25cbbdc1   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-2t7z6_kubernetes-dashboard_ecc68842-0ada-4ec4-b84e-03463aa49429_0
	e77bd12ede0c   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-2t7z6_kubernetes-dashboard_ecc68842-0ada-4ec4-b84e-03463aa49429_0
	740e55ea5410   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard_a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c_0
	57549acdeb14   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-b92hb_kube-system_1434154b-4282-4cf1-a3a5-b925ccd76d30_0
	da50445a8f26   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_02b16282-0aad-460d-8717-8198563a22eb_0
	17bd28dc31de   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_02b16282-0aad-460d-8717-8198563a22eb_0
	5e717ad6b3f9   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-km2s9_kube-system_c0cae151-f060-4074-8a25-8263d20ff0e3_0
	b2ea601faedb   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-km2s9_kube-system_c0cae151-f060-4074-8a25-8263d20ff0e3_0
	92e4a794c754   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-4hrrh_kube-system_79b995b9-f4a9-4ad8-9e2b-24351ce716be_0
	2dde919208dd   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-4hrrh_kube-system_79b995b9-f4a9-4ad8-9e2b-24351ce716be_0
	42363790759f   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-976044_kube-system_57157c6c7cbbf95a3173ed4ca519a5ad_0
	eeeecd7bc447   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-976044_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	8c0c6ee34c11   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-976044_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	89d2e8211ac0   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-976044_kube-system_f9725f169069135e0a2b8eb5fc8f9181_0
	54ca9944544b   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-976044_kube-system_57157c6c7cbbf95a3173ed4ca519a5ad_0
	6f73f574c955   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-976044_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	00013ff6844e   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-976044_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	69a5aed2912c   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-976044_kube-system_f9725f169069135e0a2b8eb5fc8f9181_0
	time="2023-10-31T18:47:26Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [5e717ad6b3f9] <==
	* .:53
	2023-10-31T18:46:04.103Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-31T18:46:04.103Z [INFO] CoreDNS-1.6.2
	2023-10-31T18:46:04.103Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-31T18:46:38.766Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-10-31T18:46:38.776Z [INFO] 127.0.0.1:43605 - 61758 "HINFO IN 2233912396121123873.1415493327063687315. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010333731s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-976044
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-976044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71321dec093a6a5f401a04c4a033d482891db45
	                    minikube.k8s.io/name=old-k8s-version-976044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_31T18_45_46_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 Oct 2023 18:45:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 Oct 2023 18:46:41 +0000   Tue, 31 Oct 2023 18:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 Oct 2023 18:46:41 +0000   Tue, 31 Oct 2023 18:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 Oct 2023 18:46:41 +0000   Tue, 31 Oct 2023 18:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 Oct 2023 18:46:41 +0000   Tue, 31 Oct 2023 18:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    old-k8s-version-976044
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 5d2934832b10479ebd04179407ea8a0f
	 System UUID:                5d293483-2b10-479e-bd04-179407ea8a0f
	 Boot ID:                    0af46627-d420-4cdd-a21e-94dca9cfe5c6
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-km2s9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-976044                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                kube-apiserver-old-k8s-version-976044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                kube-controller-manager-old-k8s-version-976044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                kube-proxy-4hrrh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                kube-scheduler-old-k8s-version-976044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                metrics-server-74d5856cc6-b92hb                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         81s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-bc8f7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-2t7z6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet, old-k8s-version-976044     Node old-k8s-version-976044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet, old-k8s-version-976044     Node old-k8s-version-976044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet, old-k8s-version-976044     Node old-k8s-version-976044 status is now: NodeHasSufficientPID
	  Normal  Starting                 84s                  kube-proxy, old-k8s-version-976044  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081377] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.369395] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.973676] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144300] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.452135] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.043069] systemd-fstab-generator[515]: Ignoring "noauto" for root device
	[  +0.123792] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.300255] systemd-fstab-generator[793]: Ignoring "noauto" for root device
	[  +0.317770] systemd-fstab-generator[830]: Ignoring "noauto" for root device
	[  +0.133207] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +0.169046] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +6.265556] systemd-fstab-generator[1073]: Ignoring "noauto" for root device
	[  +3.542081] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.035244] systemd-fstab-generator[1489]: Ignoring "noauto" for root device
	[  +0.534239] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.162973] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct31 18:41] kauditd_printk_skb: 6 callbacks suppressed
	[Oct31 18:45] hrtimer: interrupt took 2200345 ns
	[ +22.421289] systemd-fstab-generator[5381]: Ignoring "noauto" for root device
	[Oct31 18:46] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [42363790759f] <==
	* 2023-10-31 18:45:37.634572 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-31 18:45:37.634797 I | etcdserver/membership: added member b6c76b3131c1024 [https://192.168.39.16:2380] to cluster cad58bbf0f3daddf
	2023-10-31 18:45:38.062374 I | raft: b6c76b3131c1024 is starting a new election at term 1
	2023-10-31 18:45:38.062430 I | raft: b6c76b3131c1024 became candidate at term 2
	2023-10-31 18:45:38.062443 I | raft: b6c76b3131c1024 received MsgVoteResp from b6c76b3131c1024 at term 2
	2023-10-31 18:45:38.062453 I | raft: b6c76b3131c1024 became leader at term 2
	2023-10-31 18:45:38.062457 I | raft: raft.node: b6c76b3131c1024 elected leader b6c76b3131c1024 at term 2
	2023-10-31 18:45:38.063200 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-31 18:45:38.063830 I | etcdserver: published {Name:old-k8s-version-976044 ClientURLs:[https://192.168.39.16:2379]} to cluster cad58bbf0f3daddf
	2023-10-31 18:45:38.063977 I | embed: ready to serve client requests
	2023-10-31 18:45:38.064413 I | embed: ready to serve client requests
	2023-10-31 18:45:38.065746 I | embed: serving client requests on 192.168.39.16:2379
	2023-10-31 18:45:38.066662 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-31 18:45:38.066758 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-31 18:45:38.068102 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-31 18:45:44.817891 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-scheduler\" " with result "range_response_count:0 size:5" took too long (274.439824ms) to execute
	2023-10-31 18:45:44.818090 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-976044.179345881142f38a\" " with result "range_response_count:0 size:5" took too long (170.343058ms) to execute
	2023-10-31 18:45:44.818362 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (262.607744ms) to execute
	2023-10-31 18:45:45.033606 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:5" took too long (133.766597ms) to execute
	2023-10-31 18:45:49.871724 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (260.60096ms) to execute
	2023-10-31 18:45:49.872008 W | etcdserver: request "header:<ID:1163208015943736442 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/etcd-old-k8s-version-976044.179345889fdc22d2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-old-k8s-version-976044.179345889fdc22d2\" value_size:361 lease:1163208015943736317 >> failure:<>>" with result "size:16" took too long (107.620177ms) to execute
	2023-10-31 18:46:05.065063 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-km2s9\" " with result "range_response_count:1 size:1694" took too long (100.230283ms) to execute
	2023-10-31 18:46:05.065198 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-84b68f675b.1793458eea245603\" " with result "range_response_count:1 size:675" took too long (102.181404ms) to execute
	2023-10-31 18:46:11.166667 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-km2s9\" " with result "range_response_count:1 size:1769" took too long (117.261566ms) to execute
	2023-10-31 18:46:19.142338 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-74d5856cc6-b92hb.1793458f30394c74\" " with result "range_response_count:1 size:533" took too long (121.212656ms) to execute
	
	* 
	* ==> kernel <==
	*  18:47:26 up 7 min,  0 users,  load average: 0.56, 0.45, 0.21
	Linux old-k8s-version-976044 5.10.57 #1 SMP Fri Oct 27 01:16:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [89d2e8211ac0] <==
	* I1031 18:45:42.388569       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1031 18:45:42.407731       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1031 18:45:42.408094       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1031 18:45:44.163419       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 18:45:44.443548       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 18:45:44.855320       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	W1031 18:45:45.209501       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.39.16]
	I1031 18:45:45.210738       1 controller.go:606] quota admission added evaluator for: endpoints
	I1031 18:45:45.743885       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1031 18:45:46.524261       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1031 18:45:46.817078       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1031 18:46:00.962878       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1031 18:46:01.019965       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1031 18:46:01.488017       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	E1031 18:46:04.843895       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	I1031 18:46:06.025233       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 18:46:06.025353       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 18:46:06.025488       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 18:46:06.025502       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1031 18:47:06.025836       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1031 18:47:06.025952       1 handler_proxy.go:99] no RequestInfo found in the context
	E1031 18:47:06.025991       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1031 18:47:06.025998       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8c0c6ee34c11] <==
	* I1031 18:46:04.565209       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.567519       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.567914       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7a5fd726-3302-4862-9d09-122a958561f2", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.591354       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.610179       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.610833       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7a5fd726-3302-4862-9d09-122a958561f2", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.618364       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.618825       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.635516       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.641480       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.655222       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.655322       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7a5fd726-3302-4862-9d09-122a958561f2", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.659615       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.660084       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.674932       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1031 18:46:04.676662       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.676726       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:04.676744       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7a5fd726-3302-4862-9d09-122a958561f2", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1031 18:46:05.074850       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"3cec54cf-923e-42de-9c3d-bcbf22caf42e", APIVersion:"apps/v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-b92hb
	I1031 18:46:05.875015       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"7a5fd726-3302-4862-9d09-122a958561f2", APIVersion:"apps/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-bc8f7
	I1031 18:46:05.891666       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b5bc156c-c3b5-44c3-b1a3-6f763f39dc3c", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-2t7z6
	E1031 18:46:31.697620       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 18:46:33.449250       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1031 18:47:01.949502       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1031 18:47:05.451712       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [92e4a794c754] <==
	* W1031 18:46:02.860325       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1031 18:46:02.871677       1 node.go:135] Successfully retrieved node IP: 192.168.39.16
	I1031 18:46:02.871745       1 server_others.go:149] Using iptables Proxier.
	I1031 18:46:02.873352       1 server.go:529] Version: v1.16.0
	I1031 18:46:02.877314       1 config.go:131] Starting endpoints config controller
	I1031 18:46:02.877473       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1031 18:46:02.883268       1 config.go:313] Starting service config controller
	I1031 18:46:02.883300       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1031 18:46:02.978267       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1031 18:46:02.983693       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [eeeecd7bc447] <==
	* I1031 18:45:41.531813       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1031 18:45:41.577632       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 18:45:41.581554       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 18:45:41.584236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 18:45:41.585692       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 18:45:41.585818       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 18:45:41.585928       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 18:45:41.600006       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 18:45:41.600191       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 18:45:41.613389       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 18:45:41.613468       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 18:45:41.613859       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 18:45:42.579472       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1031 18:45:42.583348       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1031 18:45:42.586877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1031 18:45:42.587500       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1031 18:45:42.605329       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1031 18:45:42.605689       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1031 18:45:42.606942       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1031 18:45:42.607992       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1031 18:45:42.614563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1031 18:45:42.616996       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1031 18:45:42.618532       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1031 18:46:01.037967       1 factory.go:585] pod is already present in the activeQ
	E1031 18:46:01.086672       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-31 18:40:09 UTC, ends at Tue 2023-10-31 18:47:26 UTC. --
	Oct 31 18:46:20 old-k8s-version-976044 kubelet[5387]: E1031 18:46:20.365693    5387 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 31 18:46:20 old-k8s-version-976044 kubelet[5387]: E1031 18:46:20.365779    5387 pod_workers.go:191] Error syncing pod 1434154b-4282-4cf1-a3a5-b925ccd76d30 ("metrics-server-74d5856cc6-b92hb_kube-system(1434154b-4282-4cf1-a3a5-b925ccd76d30)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 31 18:46:21 old-k8s-version-976044 kubelet[5387]: W1031 18:46:21.063488    5387 container.go:409] Failed to create summary reader for "/kubepods/besteffort/poda6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c/c0cfd4273bbe76b3d1b92d039adf9c5e959d46a9fd020c25f623e02f7b07d575": none of the resources are being tracked.
	Oct 31 18:46:21 old-k8s-version-976044 kubelet[5387]: W1031 18:46:21.418078    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:46:22 old-k8s-version-976044 kubelet[5387]: W1031 18:46:22.434558    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:46:22 old-k8s-version-976044 kubelet[5387]: E1031 18:46:22.443867    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:46:23 old-k8s-version-976044 kubelet[5387]: W1031 18:46:23.454418    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:46:23 old-k8s-version-976044 kubelet[5387]: E1031 18:46:23.459827    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:46:28 old-k8s-version-976044 kubelet[5387]: E1031 18:46:28.626457    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:46:35 old-k8s-version-976044 kubelet[5387]: E1031 18:46:35.003727    5387 pod_workers.go:191] Error syncing pod 1434154b-4282-4cf1-a3a5-b925ccd76d30 ("metrics-server-74d5856cc6-b92hb_kube-system(1434154b-4282-4cf1-a3a5-b925ccd76d30)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 18:46:44 old-k8s-version-976044 kubelet[5387]: W1031 18:46:44.611418    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:46:44 old-k8s-version-976044 kubelet[5387]: E1031 18:46:44.618849    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:46:45 old-k8s-version-976044 kubelet[5387]: W1031 18:46:45.627625    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:46:47 old-k8s-version-976044 kubelet[5387]: E1031 18:46:47.024469    5387 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 31 18:46:47 old-k8s-version-976044 kubelet[5387]: E1031 18:46:47.024524    5387 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 31 18:46:47 old-k8s-version-976044 kubelet[5387]: E1031 18:46:47.024564    5387 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 31 18:46:47 old-k8s-version-976044 kubelet[5387]: E1031 18:46:47.024591    5387 pod_workers.go:191] Error syncing pod 1434154b-4282-4cf1-a3a5-b925ccd76d30 ("metrics-server-74d5856cc6-b92hb_kube-system(1434154b-4282-4cf1-a3a5-b925ccd76d30)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 31 18:46:48 old-k8s-version-976044 kubelet[5387]: E1031 18:46:48.626376    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:46:59 old-k8s-version-976044 kubelet[5387]: E1031 18:46:59.000317    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:47:00 old-k8s-version-976044 kubelet[5387]: E1031 18:47:00.001190    5387 pod_workers.go:191] Error syncing pod 1434154b-4282-4cf1-a3a5-b925ccd76d30 ("metrics-server-74d5856cc6-b92hb_kube-system(1434154b-4282-4cf1-a3a5-b925ccd76d30)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 18:47:11 old-k8s-version-976044 kubelet[5387]: W1031 18:47:11.842631    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:47:11 old-k8s-version-976044 kubelet[5387]: E1031 18:47:11.851260    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	Oct 31 18:47:12 old-k8s-version-976044 kubelet[5387]: E1031 18:47:12.001342    5387 pod_workers.go:191] Error syncing pod 1434154b-4282-4cf1-a3a5-b925ccd76d30 ("metrics-server-74d5856cc6-b92hb_kube-system(1434154b-4282-4cf1-a3a5-b925ccd76d30)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 31 18:47:12 old-k8s-version-976044 kubelet[5387]: W1031 18:47:12.856719    5387 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bc8f7 through plugin: invalid network status for
	Oct 31 18:47:18 old-k8s-version-976044 kubelet[5387]: E1031 18:47:18.628607    5387 pod_workers.go:191] Error syncing pod a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c ("dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bc8f7_kubernetes-dashboard(a6d2f858-2c5d-4ad7-bfae-3a8ba7c8597c)"
	
	* 
	* ==> kubernetes-dashboard [db0c25cbbdc1] <==
	* 2023/10/31 18:46:14 Starting overwatch
	2023/10/31 18:46:14 Using namespace: kubernetes-dashboard
	2023/10/31 18:46:14 Using in-cluster config to connect to apiserver
	2023/10/31 18:46:14 Using secret token for csrf signing
	2023/10/31 18:46:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/31 18:46:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/31 18:46:14 Successful initial request to the apiserver, version: v1.16.0
	2023/10/31 18:46:14 Generating JWE encryption key
	2023/10/31 18:46:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/31 18:46:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/31 18:46:15 Initializing JWE encryption key from synchronized object
	2023/10/31 18:46:15 Creating in-cluster Sidecar client
	2023/10/31 18:46:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/31 18:46:15 Serving insecurely on HTTP port: 9090
	2023/10/31 18:46:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/31 18:47:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [da50445a8f26] <==
	* I1031 18:46:04.919209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 18:46:04.988657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 18:46:04.989623       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 18:46:05.127449       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 18:46:05.128382       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-976044_d120b1e2-0c48-4e64-9850-93217c63d7c7!
	I1031 18:46:05.137395       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bf0bd6d9-5e01-4f8a-815e-a3992d53f16f", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-976044_d120b1e2-0c48-4e64-9850-93217c63d7c7 became leader
	I1031 18:46:05.233303       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-976044_d120b1e2-0c48-4e64-9850-93217c63d7c7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-976044 -n old-k8s-version-976044
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-976044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-b92hb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-976044 describe pod metrics-server-74d5856cc6-b92hb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-976044 describe pod metrics-server-74d5856cc6-b92hb: exit status 1 (79.679571ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-b92hb" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-976044 describe pod metrics-server-74d5856cc6-b92hb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.16s)


Test pass (282/321)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 22.04
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.3/json-events 13.63
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.16
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.61
20 TestOffline 103
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 159.48
27 TestAddons/parallel/Registry 16.59
28 TestAddons/parallel/Ingress 23.06
29 TestAddons/parallel/InspektorGadget 10.73
30 TestAddons/parallel/MetricsServer 5.96
31 TestAddons/parallel/HelmTiller 12.75
33 TestAddons/parallel/CSI 95.7
34 TestAddons/parallel/Headlamp 16.16
35 TestAddons/parallel/CloudSpanner 5.5
36 TestAddons/parallel/LocalPath 55.13
37 TestAddons/parallel/NvidiaDevicePlugin 5.48
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/StoppedEnableDisable 13.47
42 TestCertOptions 84.37
43 TestCertExpiration 295.5
44 TestDockerFlags 64.66
45 TestForceSystemdFlag 80.4
46 TestForceSystemdEnv 80.09
48 TestKVMDriverInstallOrUpdate 3.07
52 TestErrorSpam/setup 49.9
53 TestErrorSpam/start 0.42
54 TestErrorSpam/status 0.84
55 TestErrorSpam/pause 1.22
56 TestErrorSpam/unpause 1.34
57 TestErrorSpam/stop 12.6
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 63.56
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 38.59
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.83
69 TestFunctional/serial/CacheCmd/cache/add_local 1.83
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
77 TestFunctional/serial/ExtraConfig 38.96
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.17
80 TestFunctional/serial/LogsFileCmd 1.14
81 TestFunctional/serial/InvalidService 5.22
83 TestFunctional/parallel/ConfigCmd 0.52
84 TestFunctional/parallel/DashboardCmd 25.73
85 TestFunctional/parallel/DryRun 0.38
86 TestFunctional/parallel/InternationalLanguage 0.2
87 TestFunctional/parallel/StatusCmd 1.39
91 TestFunctional/parallel/ServiceCmdConnect 8.61
92 TestFunctional/parallel/AddonsCmd 0.18
93 TestFunctional/parallel/PersistentVolumeClaim 49.19
95 TestFunctional/parallel/SSHCmd 0.48
96 TestFunctional/parallel/CpCmd 1.16
97 TestFunctional/parallel/MySQL 31.42
98 TestFunctional/parallel/FileSync 0.23
99 TestFunctional/parallel/CertSync 1.65
103 TestFunctional/parallel/NodeLabels 0.08
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
107 TestFunctional/parallel/License 0.83
108 TestFunctional/parallel/ServiceCmd/DeployApp 16.26
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
110 TestFunctional/parallel/ProfileCmd/profile_list 0.42
111 TestFunctional/parallel/MountCmd/any-port 14.12
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
113 TestFunctional/parallel/Version/short 0.07
114 TestFunctional/parallel/Version/components 0.65
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.9
120 TestFunctional/parallel/ImageCommands/Setup 1.93
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.1
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.62
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.62
124 TestFunctional/parallel/MountCmd/specific-port 2.07
125 TestFunctional/parallel/ServiceCmd/List 0.56
126 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.73
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
129 TestFunctional/parallel/ServiceCmd/Format 0.37
130 TestFunctional/parallel/ServiceCmd/URL 0.35
131 TestFunctional/parallel/DockerEnv/bash 1.04
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.58
145 TestFunctional/parallel/ImageCommands/ImageRemove 1.1
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.52
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.37
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.01
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 335.19
154 TestImageBuild/serial/Setup 52.46
155 TestImageBuild/serial/NormalBuild 2.27
156 TestImageBuild/serial/BuildWithBuildArg 1.37
157 TestImageBuild/serial/BuildWithDockerIgnore 0.41
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.32
161 TestIngressAddonLegacy/StartLegacyK8sCluster 90.46
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.54
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.85
168 TestJSONOutput/start/Command 63.66
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.58
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.56
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.43
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.24
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 104.69
200 TestMountStart/serial/StartWithMountFirst 30.49
201 TestMountStart/serial/VerifyMountFirst 0.44
202 TestMountStart/serial/StartWithMountSecond 30.3
203 TestMountStart/serial/VerifyMountSecond 0.41
204 TestMountStart/serial/DeleteFirst 0.72
205 TestMountStart/serial/VerifyMountPostDelete 0.43
206 TestMountStart/serial/Stop 2.11
207 TestMountStart/serial/RestartStopped 25.73
208 TestMountStart/serial/VerifyMountPostStop 0.44
215 TestMultiNode/serial/ProfileList 0.26
219 TestMultiNode/serial/RestartKeepsNodes 165.41
220 TestMultiNode/serial/DeleteNode 1.83
221 TestMultiNode/serial/StopMultiNode 25.68
222 TestMultiNode/serial/RestartMultiNode 104.03
223 TestMultiNode/serial/ValidateNameConflict 52.68
228 TestPreload 181.3
230 TestScheduledStopUnix 124.21
231 TestSkaffold 142.6
234 TestRunningBinaryUpgrade 199.16
236 TestKubernetesUpgrade 218.89
249 TestStoppedBinaryUpgrade/Setup 1.59
250 TestStoppedBinaryUpgrade/Upgrade 207.53
252 TestPause/serial/Start 75.4
253 TestPause/serial/SecondStartNoReconfiguration 48.13
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 53.48
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.48
265 TestPause/serial/Pause 0.71
266 TestPause/serial/VerifyStatus 0.31
267 TestPause/serial/Unpause 0.56
268 TestPause/serial/PauseAgain 0.79
269 TestPause/serial/DeletePaused 1.12
270 TestPause/serial/VerifyDeletedResources 0.3
271 TestNoKubernetes/serial/StartWithStopK8s 69.38
272 TestNoKubernetes/serial/Start 68.83
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
274 TestNoKubernetes/serial/ProfileList 1.37
275 TestNoKubernetes/serial/Stop 3.13
276 TestNoKubernetes/serial/StartNoArgs 41.37
277 TestNetworkPlugins/group/auto/Start 113.53
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
279 TestNetworkPlugins/group/kindnet/Start 107.54
280 TestNetworkPlugins/group/auto/KubeletFlags 0.24
281 TestNetworkPlugins/group/auto/NetCatPod 12.39
282 TestNetworkPlugins/group/auto/DNS 0.18
283 TestNetworkPlugins/group/auto/Localhost 0.16
284 TestNetworkPlugins/group/auto/HairPin 0.16
285 TestNetworkPlugins/group/calico/Start 102.55
286 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
287 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
288 TestNetworkPlugins/group/kindnet/NetCatPod 12.4
289 TestNetworkPlugins/group/custom-flannel/Start 98.93
290 TestNetworkPlugins/group/kindnet/DNS 0.19
291 TestNetworkPlugins/group/kindnet/Localhost 0.16
292 TestNetworkPlugins/group/kindnet/HairPin 0.15
293 TestNetworkPlugins/group/false/Start 94.69
294 TestNetworkPlugins/group/calico/ControllerPod 5.04
295 TestNetworkPlugins/group/calico/KubeletFlags 0.27
296 TestNetworkPlugins/group/calico/NetCatPod 13.62
297 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
298 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.48
299 TestNetworkPlugins/group/enable-default-cni/Start 75.81
300 TestNetworkPlugins/group/calico/DNS 0.22
301 TestNetworkPlugins/group/calico/Localhost 0.17
302 TestNetworkPlugins/group/calico/HairPin 0.18
303 TestNetworkPlugins/group/custom-flannel/DNS 0.25
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
306 TestNetworkPlugins/group/false/KubeletFlags 0.3
307 TestNetworkPlugins/group/false/NetCatPod 13.46
308 TestNetworkPlugins/group/flannel/Start 88.73
309 TestNetworkPlugins/group/bridge/Start 110.86
310 TestNetworkPlugins/group/false/DNS 0.24
311 TestNetworkPlugins/group/false/Localhost 0.19
312 TestNetworkPlugins/group/false/HairPin 0.19
313 TestNetworkPlugins/group/kubenet/Start 117.65
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.44
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
320 TestStartStop/group/old-k8s-version/serial/FirstStart 164.93
321 TestNetworkPlugins/group/flannel/ControllerPod 5.02
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
323 TestNetworkPlugins/group/flannel/NetCatPod 16.41
324 TestNetworkPlugins/group/flannel/DNS 0.22
325 TestNetworkPlugins/group/flannel/Localhost 0.2
326 TestNetworkPlugins/group/flannel/HairPin 0.19
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
328 TestNetworkPlugins/group/bridge/NetCatPod 12.42
329 TestNetworkPlugins/group/bridge/DNS 0.28
330 TestNetworkPlugins/group/bridge/Localhost 0.24
331 TestNetworkPlugins/group/bridge/HairPin 0.21
333 TestStartStop/group/no-preload/serial/FirstStart 89.45
334 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
335 TestNetworkPlugins/group/kubenet/NetCatPod 12.52
337 TestStartStop/group/embed-certs/serial/FirstStart 98.21
338 TestNetworkPlugins/group/kubenet/DNS 0.18
339 TestNetworkPlugins/group/kubenet/Localhost 0.14
340 TestNetworkPlugins/group/kubenet/HairPin 0.16
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.67
343 TestStartStop/group/no-preload/serial/DeployApp 11.53
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.4
345 TestStartStop/group/no-preload/serial/Stop 13.16
346 TestStartStop/group/embed-certs/serial/DeployApp 10.54
347 TestStartStop/group/old-k8s-version/serial/DeployApp 10.55
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
349 TestStartStop/group/no-preload/serial/SecondStart 337.2
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
352 TestStartStop/group/embed-certs/serial/Stop 13.19
353 TestStartStop/group/old-k8s-version/serial/Stop 13.29
354 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.4
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
356 TestStartStop/group/embed-certs/serial/SecondStart 332.33
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
358 TestStartStop/group/old-k8s-version/serial/SecondStart 459.39
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 355.01
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.02
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.03
366 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
367 TestStartStop/group/no-preload/serial/Pause 3.23
369 TestStartStop/group/newest-cni/serial/FirstStart 74.11
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
372 TestStartStop/group/embed-certs/serial/Pause 3.21
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 23.03
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.98
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
379 TestStartStop/group/newest-cni/serial/Stop 13.14
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
381 TestStartStop/group/newest-cni/serial/SecondStart 47.34
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
385 TestStartStop/group/old-k8s-version/serial/Pause 2.82
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
389 TestStartStop/group/newest-cni/serial/Pause 2.97
TestDownloadOnly/v1.16.0/json-events (22.04s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-876817 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-876817 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (22.036878607s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.04s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-876817
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-876817: exit status 85 (84.621886ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-876817 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:37 UTC |          |
	|         | -p download-only-876817        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:37:06
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:37:06.880638  250423 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:37:06.880828  250423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:37:06.880840  250423 out.go:309] Setting ErrFile to fd 2...
	I1031 17:37:06.880845  250423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:37:06.881073  250423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	W1031 17:37:06.881213  250423 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17530-243226/.minikube/config/config.json: open /home/jenkins/minikube-integration/17530-243226/.minikube/config/config.json: no such file or directory
	I1031 17:37:06.881887  250423 out.go:303] Setting JSON to true
	I1031 17:37:06.883109  250423 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4737,"bootTime":1698769090,"procs":427,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:37:06.883186  250423 start.go:138] virtualization: kvm guest
	I1031 17:37:06.885948  250423 out.go:97] [download-only-876817] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:37:06.887719  250423 out.go:169] MINIKUBE_LOCATION=17530
	W1031 17:37:06.886152  250423 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 17:37:06.886206  250423 notify.go:220] Checking for updates...
	I1031 17:37:06.891035  250423 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:37:06.892652  250423 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:37:06.894443  250423 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:37:06.895999  250423 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 17:37:06.899001  250423 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 17:37:06.899243  250423 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:37:07.014326  250423 out.go:97] Using the kvm2 driver based on user configuration
	I1031 17:37:07.014403  250423 start.go:298] selected driver: kvm2
	I1031 17:37:07.014432  250423 start.go:902] validating driver "kvm2" against <nil>
	I1031 17:37:07.014827  250423 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:37:07.014996  250423 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:37:07.031202  250423 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:37:07.031281  250423 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1031 17:37:07.031772  250423 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1031 17:37:07.031927  250423 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 17:37:07.032007  250423 cni.go:84] Creating CNI manager for ""
	I1031 17:37:07.032025  250423 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1031 17:37:07.032035  250423 start_flags.go:323] config:
	{Name:download-only-876817 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-876817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:37:07.032241  250423 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:37:07.034621  250423 out.go:97] Downloading VM boot image ...
	I1031 17:37:07.034660  250423 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/amd64/minikube-v1.32.0-amd64.iso
	I1031 17:37:15.704065  250423 out.go:97] Starting control plane node download-only-876817 in cluster download-only-876817
	I1031 17:37:15.704114  250423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 17:37:15.800548  250423 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1031 17:37:15.800583  250423 cache.go:56] Caching tarball of preloaded images
	I1031 17:37:15.800761  250423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 17:37:15.802953  250423 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 17:37:15.802989  250423 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:37:15.909056  250423 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-876817"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.3/json-events (13.63s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-876817 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-876817 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (13.62782461s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (13.63s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-876817
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-876817: exit status 85 (79.26829ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-876817 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:37 UTC |          |
	|         | -p download-only-876817        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-876817 | jenkins | v1.32.0-beta.0 | 31 Oct 23 17:37 UTC |          |
	|         | -p download-only-876817        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/31 17:37:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:37:29.003606  250505 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:37:29.003746  250505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:37:29.003765  250505 out.go:309] Setting ErrFile to fd 2...
	I1031 17:37:29.003773  250505 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:37:29.004011  250505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	W1031 17:37:29.004147  250505 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17530-243226/.minikube/config/config.json: open /home/jenkins/minikube-integration/17530-243226/.minikube/config/config.json: no such file or directory
	I1031 17:37:29.004633  250505 out.go:303] Setting JSON to true
	I1031 17:37:29.005699  250505 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4759,"bootTime":1698769090,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:37:29.005793  250505 start.go:138] virtualization: kvm guest
	I1031 17:37:29.008123  250505 out.go:97] [download-only-876817] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:37:29.010009  250505 out.go:169] MINIKUBE_LOCATION=17530
	I1031 17:37:29.008306  250505 notify.go:220] Checking for updates...
	I1031 17:37:29.013454  250505 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:37:29.015377  250505 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:37:29.017099  250505 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:37:29.018668  250505 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 17:37:29.021421  250505 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 17:37:29.021882  250505 config.go:182] Loaded profile config "download-only-876817": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1031 17:37:29.021937  250505 start.go:810] api.Load failed for download-only-876817: filestore "download-only-876817": Docker machine "download-only-876817" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 17:37:29.022029  250505 driver.go:378] Setting default libvirt URI to qemu:///system
	W1031 17:37:29.022082  250505 start.go:810] api.Load failed for download-only-876817: filestore "download-only-876817": Docker machine "download-only-876817" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 17:37:29.056047  250505 out.go:97] Using the kvm2 driver based on existing profile
	I1031 17:37:29.056081  250505 start.go:298] selected driver: kvm2
	I1031 17:37:29.056086  250505 start.go:902] validating driver "kvm2" against &{Name:download-only-876817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-on
ly-876817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:37:29.056513  250505 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:37:29.056593  250505 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17530-243226/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:37:29.072146  250505 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0-beta.0
	I1031 17:37:29.072931  250505 cni.go:84] Creating CNI manager for ""
	I1031 17:37:29.072951  250505 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1031 17:37:29.072964  250505 start_flags.go:323] config:
	{Name:download-only-876817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-876817 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:37:29.073129  250505 iso.go:125] acquiring lock: {Name:mk2cff57f3c99ffff6630394b4a4517731e63118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:37:29.075310  250505 out.go:97] Starting control plane node download-only-876817 in cluster download-only-876817
	I1031 17:37:29.075334  250505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:37:29.424987  250505 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1031 17:37:29.425035  250505 cache.go:56] Caching tarball of preloaded images
	I1031 17:37:29.425188  250505 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1031 17:37:29.427078  250505 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1031 17:37:29.427103  250505 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:37:29.531726  250505 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /home/jenkins/minikube-integration/17530-243226/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-876817"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.16s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-876817
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-926594 --alsologtostderr --binary-mirror http://127.0.0.1:37317 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-926594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-926594
--- PASS: TestBinaryMirror (0.61s)

TestOffline (103s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-679450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-679450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m41.739838266s)
helpers_test.go:175: Cleaning up "offline-docker-679450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-679450
E1031 18:25:23.267049  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-679450: (1.263727687s)
--- PASS: TestOffline (103.00s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164380
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-164380: exit status 85 (69.50731ms)

-- stdout --
	* Profile "addons-164380" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164380"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164380
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-164380: exit status 85 (68.867065ms)

-- stdout --
	* Profile "addons-164380" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164380"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (159.48s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-164380 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-164380 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m39.483800092s)
--- PASS: TestAddons/Setup (159.48s)

TestAddons/parallel/Registry (16.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 25.873002ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-b5xrx" [343eb13e-05b7-4c76-a640-07d9883df87d] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023137766s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mljzh" [9e18dd6d-3458-4ea0-8799-60087776de59] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.021720652s
addons_test.go:339: (dbg) Run:  kubectl --context addons-164380 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-164380 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-164380 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.55572568s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.59s)

TestAddons/parallel/Ingress (23.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-164380 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-164380 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-164380 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4043fd24-ae13-40ea-b5a4-64a141e4e8f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4043fd24-ae13-40ea-b5a4-64a141e4e8f4] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.012482027s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-164380 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.7
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-164380 addons disable ingress-dns --alsologtostderr -v=1: (3.268967373s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-164380 addons disable ingress --alsologtostderr -v=1: (7.84476704s)
--- PASS: TestAddons/parallel/Ingress (23.06s)

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5stdp" [a9f09c26-66af-4d36-abc5-206e02318e1d] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01522779s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-164380
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-164380: (5.717928491s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/MetricsServer (5.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 25.628862ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-lgljv" [0caf7c2f-a0fb-43d6-8368-7949118d1c36] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.029034308s
addons_test.go:414: (dbg) Run:  kubectl --context addons-164380 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

TestAddons/parallel/HelmTiller (12.75s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.154506ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-cz8xg" [78b104da-8f45-4817-b0a4-809a40d68879] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014058153s
addons_test.go:472: (dbg) Run:  kubectl --context addons-164380 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-164380 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.160247614s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.75s)

TestAddons/parallel/CSI (95.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 26.136819ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-164380 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/10/31 17:40:39 [DEBUG] GET http://192.168.39.7:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-164380 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f1bbcf97-5c7e-499a-88b6-d8f1e210ed2d] Pending
helpers_test.go:344: "task-pv-pod" [f1bbcf97-5c7e-499a-88b6-d8f1e210ed2d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f1bbcf97-5c7e-499a-88b6-d8f1e210ed2d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.019629564s
addons_test.go:583: (dbg) Run:  kubectl --context addons-164380 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-164380 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-164380 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-164380 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-164380 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-164380 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-164380 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-164380 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5d7e686e-b993-4a0b-a19a-7760777c3859] Pending
helpers_test.go:344: "task-pv-pod-restore" [5d7e686e-b993-4a0b-a19a-7760777c3859] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5d7e686e-b993-4a0b-a19a-7760777c3859] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.013522131s
addons_test.go:625: (dbg) Run:  kubectl --context addons-164380 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-164380 delete pod task-pv-pod-restore: (1.052296335s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-164380 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-164380 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-164380 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.750211997s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (95.70s)
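The long run of helpers_test.go:394 lines above is the test helper polling `kubectl ... get pvc hpvc -o jsonpath={.status.phase}` until the claim reports Bound. A minimal standalone sketch of that poll loop, with a hypothetical `get_phase` stub standing in for the real kubectl call so it runs without a cluster:

```shell
# Hypothetical stand-in for:
#   kubectl --context addons-164380 get pvc hpvc -o jsonpath={.status.phase}
# Simulates a claim that stays Pending for two polls, then becomes Bound.
ATTEMPT=0
get_phase() {
  ATTEMPT=$((ATTEMPT + 1))
  if [ "$ATTEMPT" -lt 3 ]; then
    PHASE="Pending"
  else
    PHASE="Bound"
  fi
}

# Poll until the claim is Bound, or give up after a fixed number of tries.
for i in 1 2 3 4 5 6 7 8 9 10; do
  get_phase
  echo "attempt $i: phase=$PHASE"
  [ "$PHASE" = "Bound" ] && break
  sleep 0.1
done
```

Against a live cluster the stub would be the actual kubectl invocation, and the helper bounds the loop with a deadline (6m0s here) rather than a fixed attempt count.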

TestAddons/parallel/Headlamp (16.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-164380 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-164380 --alsologtostderr -v=1: (1.131715689s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-rdgzw" [bb57585a-1052-4e28-8b0a-d6f96eb23c16] Pending
helpers_test.go:344: "headlamp-94b766c-rdgzw" [bb57585a-1052-4e28-8b0a-d6f96eb23c16] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-rdgzw" [bb57585a-1052-4e28-8b0a-d6f96eb23c16] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.02634122s
--- PASS: TestAddons/parallel/Headlamp (16.16s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-j5hs6" [bd22e2eb-2768-46a4-94d0-86844d803a56] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009363307s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-164380
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (55.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-164380 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-164380 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7b6a060e-c2a1-47e5-ad4a-5149b8f48768] Pending
helpers_test.go:344: "test-local-path" [7b6a060e-c2a1-47e5-ad4a-5149b8f48768] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7b6a060e-c2a1-47e5-ad4a-5149b8f48768] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7b6a060e-c2a1-47e5-ad4a-5149b8f48768] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.015771025s
addons_test.go:890: (dbg) Run:  kubectl --context addons-164380 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 ssh "cat /opt/local-path-provisioner/pvc-d74036d5-eade-4987-b4a4-c56f3bbf35e2_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-164380 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-164380 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-164380 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-164380 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.466078532s)
--- PASS: TestAddons/parallel/LocalPath (55.13s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lvzxp" [738dd5de-b20a-44a0-8684-bebc22cf413b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.013411614s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-164380
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-164380 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-164380 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.47s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-164380
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-164380: (13.123930232s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164380
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164380
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-164380
--- PASS: TestAddons/StoppedEnableDisable (13.47s)

TestCertOptions (84.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-863571 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-863571 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m22.667884045s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-863571 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-863571 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-863571 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-863571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-863571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-863571: (1.120056535s)
--- PASS: TestCertOptions (84.37s)
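TestCertOptions injects extra SANs into the apiserver certificate via --apiserver-ips and --apiserver-names, then verifies them by dumping the certificate with openssl over ssh. A hedged sketch of that style of check against a throwaway self-signed certificate (the paths and the cert are illustrative and generated locally, not the test's actual apiserver.crt; requires OpenSSL 1.1.1+ for -addext):

```shell
# Generate a throwaway self-signed cert carrying the same kinds of SANs the
# test requests (illustrative values; not minikube's real apiserver cert).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# The same style of check the test runs: dump the cert and inspect its SANs.
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```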

TestCertExpiration (295.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-404624 --memory=2048 --cert-expiration=3m --driver=kvm2 
E1031 18:29:51.058256  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-404624 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m26.083966157s)
E1031 18:31:12.979893  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:31:15.983609  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-404624 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-404624 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (28.175510058s)
helpers_test.go:175: Cleaning up "cert-expiration-404624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-404624
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-404624: (1.238502383s)
--- PASS: TestCertExpiration (295.50s)

                                                
                                    
TestDockerFlags (64.66s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-118572 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-118572 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m3.168177804s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-118572 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-118572 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-118572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-118572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-118572: (1.029783169s)
--- PASS: TestDockerFlags (64.66s)

                                                
                                    
TestForceSystemdFlag (80.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-051346 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E1031 18:28:29.135352  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.140718  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.151066  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.171423  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.211916  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.292265  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.452754  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:29.773420  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:30.413864  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:31.694298  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:34.254741  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:39.375871  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:28:49.616599  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-051346 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m18.918653287s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-051346 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-051346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-051346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-051346: (1.18343415s)
--- PASS: TestForceSystemdFlag (80.40s)

                                                
                                    
TestForceSystemdEnv (80.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-230459 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-230459 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m18.041054336s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-230459 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-230459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-230459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-230459: (1.550829237s)
--- PASS: TestForceSystemdEnv (80.09s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.07s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.07s)

                                                
                                    
TestErrorSpam/setup (49.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-622138 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-622138 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-622138 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-622138 --driver=kvm2 : (49.896205679s)
--- PASS: TestErrorSpam/setup (49.90s)

                                                
                                    
TestErrorSpam/start (0.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
TestErrorSpam/pause (1.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 pause
--- PASS: TestErrorSpam/pause (1.22s)

                                                
                                    
TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

                                                
                                    
TestErrorSpam/stop (12.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 stop: (12.418472651s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-622138 --log_dir /tmp/nospam-622138 stop
--- PASS: TestErrorSpam/stop (12.60s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17530-243226/.minikube/files/etc/test/nested/copy/250411/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (63.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-002320 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.561958914s)
--- PASS: TestFunctional/serial/StartWithProxy (63.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-002320 --alsologtostderr -v=8: (38.586363992s)
functional_test.go:659: soft start took 38.587048879s for "functional-002320" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.59s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-002320 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:3.1: (1.302215093s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:3.3
E1031 17:45:23.266649  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.272346  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.282624  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.302928  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.343262  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.423632  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:23.584066  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:3.3: (1.27919693s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:latest
E1031 17:45:23.904693  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:24.545670  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 cache add registry.k8s.io/pause:latest: (1.248419045s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-002320 /tmp/TestFunctionalserialCacheCmdcacheadd_local1578288640/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache add minikube-local-cache-test:functional-002320
E1031 17:45:25.826171  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 cache add minikube-local-cache-test:functional-002320: (1.468034003s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache delete minikube-local-cache-test:functional-002320
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-002320
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.179877ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cache reload
E1031 17:45:28.387247  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 kubectl -- --context functional-002320 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-002320 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1031 17:45:33.507913  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:45:43.748873  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:46:04.229744  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-002320 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.959864854s)
functional_test.go:757: restart took 38.960047587s for "functional-002320" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.96s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-002320 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 logs: (1.171990528s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 logs --file /tmp/TestFunctionalserialLogsFileCmd2136205152/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 logs --file /tmp/TestFunctionalserialLogsFileCmd2136205152/001/logs.txt: (1.14217999s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctional/serial/InvalidService (5.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-002320 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-002320
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-002320: exit status 115 (315.583379ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.191:32613 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-002320 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-002320 delete -f testdata/invalidsvc.yaml: (1.583822202s)
--- PASS: TestFunctional/serial/InvalidService (5.22s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 config get cpus: exit status 14 (97.236116ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 config get cpus: exit status 14 (71.814268ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (25.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-002320 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-002320 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 256463: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.73s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-002320 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (184.226843ms)
-- stdout --
	* [functional-002320] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1031 17:46:17.218160  256165 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:46:17.218377  256165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:46:17.218389  256165 out.go:309] Setting ErrFile to fd 2...
	I1031 17:46:17.218397  256165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:46:17.218656  256165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:46:17.219659  256165 out.go:303] Setting JSON to false
	I1031 17:46:17.220922  256165 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5288,"bootTime":1698769090,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:46:17.221067  256165 start.go:138] virtualization: kvm guest
	I1031 17:46:17.223622  256165 out.go:177] * [functional-002320] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:46:17.225328  256165 notify.go:220] Checking for updates...
	I1031 17:46:17.225334  256165 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:46:17.227070  256165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:46:17.228847  256165 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:46:17.230919  256165 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:46:17.232889  256165 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:46:17.235038  256165 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:46:17.237252  256165 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:46:17.237742  256165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:46:17.237835  256165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:46:17.254151  256165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I1031 17:46:17.254679  256165 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:46:17.256122  256165 main.go:141] libmachine: Using API Version  1
	I1031 17:46:17.256161  256165 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:46:17.256699  256165 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:46:17.256957  256165 main.go:141] libmachine: (functional-002320) Calling .DriverName
	I1031 17:46:17.257258  256165 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:46:17.257684  256165 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:46:17.257744  256165 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:46:17.275604  256165 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I1031 17:46:17.276220  256165 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:46:17.276883  256165 main.go:141] libmachine: Using API Version  1
	I1031 17:46:17.276922  256165 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:46:17.277536  256165 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:46:17.277914  256165 main.go:141] libmachine: (functional-002320) Calling .DriverName
	I1031 17:46:17.317732  256165 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 17:46:17.319466  256165 start.go:298] selected driver: kvm2
	I1031 17:46:17.319489  256165 start.go:902] validating driver "kvm2" against &{Name:functional-002320 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-002
320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:46:17.319635  256165 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:46:17.322494  256165 out.go:177] 
	W1031 17:46:17.324110  256165 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 17:46:17.325579  256165 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.38s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-002320 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-002320 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (196.25299ms)
-- stdout --
	* [functional-002320] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1031 17:46:17.024093  256106 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:46:17.024225  256106 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:46:17.024234  256106 out.go:309] Setting ErrFile to fd 2...
	I1031 17:46:17.024242  256106 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:46:17.024810  256106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 17:46:17.025684  256106 out.go:303] Setting JSON to false
	I1031 17:46:17.026986  256106 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5287,"bootTime":1698769090,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:46:17.027081  256106 start.go:138] virtualization: kvm guest
	I1031 17:46:17.029862  256106 out.go:177] * [functional-002320] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I1031 17:46:17.031672  256106 out.go:177]   - MINIKUBE_LOCATION=17530
	I1031 17:46:17.031731  256106 notify.go:220] Checking for updates...
	I1031 17:46:17.033630  256106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:46:17.035292  256106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	I1031 17:46:17.037016  256106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	I1031 17:46:17.038643  256106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:46:17.040272  256106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1031 17:46:17.042543  256106 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 17:46:17.043184  256106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:46:17.043252  256106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:46:17.062516  256106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I1031 17:46:17.063164  256106 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:46:17.063879  256106 main.go:141] libmachine: Using API Version  1
	I1031 17:46:17.063899  256106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:46:17.064323  256106 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:46:17.064616  256106 main.go:141] libmachine: (functional-002320) Calling .DriverName
	I1031 17:46:17.064938  256106 driver.go:378] Setting default libvirt URI to qemu:///system
	I1031 17:46:17.065348  256106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:46:17.065389  256106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 17:46:17.085886  256106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I1031 17:46:17.086478  256106 main.go:141] libmachine: () Calling .GetVersion
	I1031 17:46:17.087045  256106 main.go:141] libmachine: Using API Version  1
	I1031 17:46:17.087070  256106 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 17:46:17.087464  256106 main.go:141] libmachine: () Calling .GetMachineName
	I1031 17:46:17.087685  256106 main.go:141] libmachine: (functional-002320) Calling .DriverName
	I1031 17:46:17.130846  256106 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1031 17:46:17.132454  256106 start.go:298] selected driver: kvm2
	I1031 17:46:17.132475  256106 start.go:902] validating driver "kvm2" against &{Name:functional-002320 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.41@sha256:dbb2380b629f0776f6e6e49b5825fe42814849b2a6ad4707fbcf87004835f612 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-002
320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home
/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1031 17:46:17.132705  256106 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:46:17.135484  256106 out.go:177] 
	W1031 17:46:17.137057  256106 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 17:46:17.138657  256106 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.39s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.39s)

TestFunctional/parallel/ServiceCmdConnect (8.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-002320 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-002320 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xfqb9" [0e7ab27e-ebad-4b96-9353-a9f3bae971e0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xfqb9" [0e7ab27e-ebad-4b96-9353-a9f3bae971e0] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.035612484s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.191:31263
functional_test.go:1674: http://192.168.39.191:31263: success! body:
Hostname: hello-node-connect-55497b8b78-xfqb9
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.191:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.191:31263
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.61s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (49.19s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9b1b3147-899c-4533-ba5b-86a420bd08bb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015092771s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-002320 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-002320 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-002320 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-002320 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-002320 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cad03277-fb95-48eb-b81f-8af8cd042355] Pending
helpers_test.go:344: "sp-pod" [cad03277-fb95-48eb-b81f-8af8cd042355] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cad03277-fb95-48eb-b81f-8af8cd042355] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.013630475s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-002320 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-002320 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-002320 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8544a1ac-7dc3-4a62-ae08-30af4aff768b] Pending
helpers_test.go:344: "sp-pod" [8544a1ac-7dc3-4a62-ae08-30af4aff768b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8544a1ac-7dc3-4a62-ae08-30af4aff768b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.019791015s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-002320 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.19s)

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh -n functional-002320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 cp functional-002320:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1677518588/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh -n functional-002320 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.16s)

TestFunctional/parallel/MySQL (31.42s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-002320 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-7d7jn" [0b7998bb-08dd-46ef-87d1-c9f0c4bb37ab] Pending
helpers_test.go:344: "mysql-859648c796-7d7jn" [0b7998bb-08dd-46ef-87d1-c9f0c4bb37ab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2023/10/31 17:46:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-859648c796-7d7jn" [0b7998bb-08dd-46ef-87d1-c9f0c4bb37ab] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.029099651s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;": exit status 1 (349.445884ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;": exit status 1 (216.176104ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;": exit status 1 (156.553341ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-002320 exec mysql-859648c796-7d7jn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.42s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/250411/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /etc/test/nested/copy/250411/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/250411.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /etc/ssl/certs/250411.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/250411.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /usr/share/ca-certificates/250411.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2504112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /etc/ssl/certs/2504112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2504112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /usr/share/ca-certificates/2504112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-002320 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh "sudo systemctl is-active crio": exit status 1 (282.201083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

TestFunctional/parallel/License (0.83s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.83s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-002320 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-002320 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-22kd6" [294ba20d-96e6-4d57-ac4b-74addc528dbc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-22kd6" [294ba20d-96e6-4d57-ac4b-74addc528dbc] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.03925573s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "353.417435ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "68.933654ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/MountCmd/any-port (14.12s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdany-port3424752868/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698774376276433665" to /tmp/TestFunctionalparallelMountCmdany-port3424752868/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698774376276433665" to /tmp/TestFunctionalparallelMountCmdany-port3424752868/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698774376276433665" to /tmp/TestFunctionalparallelMountCmdany-port3424752868/001/test-1698774376276433665
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.757124ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 17:46 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 17:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 17:46 test-1698774376276433665
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh cat /mount-9p/test-1698774376276433665
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-002320 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [be256202-aaca-4e41-b604-d0b749a0c1e2] Pending
helpers_test.go:344: "busybox-mount" [be256202-aaca-4e41-b604-d0b749a0c1e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [be256202-aaca-4e41-b604-d0b749a0c1e2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [be256202-aaca-4e41-b604-d0b749a0c1e2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.046564044s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-002320 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdany-port3424752868/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "266.403329ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.768543ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-002320 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-002320
docker.io/library/minikube-local-cache-test:functional-002320
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-002320 image ls --format short --alsologtostderr:
I1031 17:46:44.395786  258063 out.go:296] Setting OutFile to fd 1 ...
I1031 17:46:44.395933  258063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:44.395946  258063 out.go:309] Setting ErrFile to fd 2...
I1031 17:46:44.395954  258063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:44.396247  258063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
I1031 17:46:44.397086  258063 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:44.397230  258063 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:44.397834  258063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:44.397906  258063 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:44.413899  258063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34761
I1031 17:46:44.414484  258063 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:44.415053  258063 main.go:141] libmachine: Using API Version  1
I1031 17:46:44.415081  258063 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:44.415544  258063 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:44.415783  258063 main.go:141] libmachine: (functional-002320) Calling .GetState
I1031 17:46:44.417673  258063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:44.417736  258063 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:44.432852  258063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
I1031 17:46:44.433407  258063 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:44.434020  258063 main.go:141] libmachine: Using API Version  1
I1031 17:46:44.434082  258063 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:44.434504  258063 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:44.434691  258063 main.go:141] libmachine: (functional-002320) Calling .DriverName
I1031 17:46:44.434944  258063 ssh_runner.go:195] Run: systemctl --version
I1031 17:46:44.434981  258063 main.go:141] libmachine: (functional-002320) Calling .GetSSHHostname
I1031 17:46:44.438347  258063 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:44.438843  258063 main.go:141] libmachine: (functional-002320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c4:37", ip: ""} in network mk-functional-002320: {Iface:virbr1 ExpiryTime:2023-10-31 18:43:54 +0000 UTC Type:0 Mac:52:54:00:62:c4:37 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-002320 Clientid:01:52:54:00:62:c4:37}
I1031 17:46:44.438876  258063 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined IP address 192.168.39.191 and MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:44.438997  258063 main.go:141] libmachine: (functional-002320) Calling .GetSSHPort
I1031 17:46:44.439200  258063 main.go:141] libmachine: (functional-002320) Calling .GetSSHKeyPath
I1031 17:46:44.439412  258063 main.go:141] libmachine: (functional-002320) Calling .GetSSHUsername
I1031 17:46:44.439556  258063 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/functional-002320/id_rsa Username:docker}
I1031 17:46:44.552199  258063 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 17:46:44.583945  258063 main.go:141] libmachine: Making call to close driver server
I1031 17:46:44.583958  258063 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:44.584262  258063 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:44.584295  258063 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:44.584301  258063 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
I1031 17:46:44.584307  258063 main.go:141] libmachine: Making call to close driver server
I1031 17:46:44.584326  258063 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:44.584615  258063 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:44.584632  258063 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-002320 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/minikube-local-cache-test | functional-002320 | 5cc6d99be8ccd | 30B    |
| gcr.io/google-containers/addon-resizer      | functional-002320 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-002320 image ls --format table --alsologtostderr:
I1031 17:46:45.398576  258222 out.go:296] Setting OutFile to fd 1 ...
I1031 17:46:45.398748  258222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.398764  258222 out.go:309] Setting ErrFile to fd 2...
I1031 17:46:45.398779  258222 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.398989  258222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
I1031 17:46:45.399563  258222 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.399687  258222 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.400084  258222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.400148  258222 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.414957  258222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43243
I1031 17:46:45.415477  258222 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.416160  258222 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.416190  258222 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.416620  258222 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.416865  258222 main.go:141] libmachine: (functional-002320) Calling .GetState
I1031 17:46:45.419208  258222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.419264  258222 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.434113  258222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35081
I1031 17:46:45.434610  258222 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.435164  258222 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.435189  258222 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.435572  258222 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.435818  258222 main.go:141] libmachine: (functional-002320) Calling .DriverName
I1031 17:46:45.436058  258222 ssh_runner.go:195] Run: systemctl --version
I1031 17:46:45.436081  258222 main.go:141] libmachine: (functional-002320) Calling .GetSSHHostname
I1031 17:46:45.439198  258222 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.439557  258222 main.go:141] libmachine: (functional-002320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c4:37", ip: ""} in network mk-functional-002320: {Iface:virbr1 ExpiryTime:2023-10-31 18:43:54 +0000 UTC Type:0 Mac:52:54:00:62:c4:37 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-002320 Clientid:01:52:54:00:62:c4:37}
I1031 17:46:45.439600  258222 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined IP address 192.168.39.191 and MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.439769  258222 main.go:141] libmachine: (functional-002320) Calling .GetSSHPort
I1031 17:46:45.439983  258222 main.go:141] libmachine: (functional-002320) Calling .GetSSHKeyPath
I1031 17:46:45.440148  258222 main.go:141] libmachine: (functional-002320) Calling .GetSSHUsername
I1031 17:46:45.440317  258222 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/functional-002320/id_rsa Username:docker}
I1031 17:46:45.564607  258222 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 17:46:45.607600  258222 main.go:141] libmachine: Making call to close driver server
I1031 17:46:45.607620  258222 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:45.607926  258222 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:45.607954  258222 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:45.607967  258222 main.go:141] libmachine: Making call to close driver server
I1031 17:46:45.607996  258222 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:45.608269  258222 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
I1031 17:46:45.608306  258222 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:45.608321  258222 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-002320 image ls --format json --alsologtostderr:
[{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-002320"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"5cc6d99be8ccd55b07460fa1c83812065484b0aee0d3e1d35dc1eb903806ddfd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-002320"],"size":"30"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-002320 image ls --format json --alsologtostderr:
I1031 17:46:45.141969  258176 out.go:296] Setting OutFile to fd 1 ...
I1031 17:46:45.142137  258176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.142148  258176 out.go:309] Setting ErrFile to fd 2...
I1031 17:46:45.142153  258176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.142460  258176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
I1031 17:46:45.143152  258176 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.143289  258176 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.143731  258176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.143785  258176 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.160372  258176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
I1031 17:46:45.160940  258176 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.161576  258176 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.161600  258176 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.162042  258176 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.162289  258176 main.go:141] libmachine: (functional-002320) Calling .GetState
I1031 17:46:45.164602  258176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.164647  258176 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.180838  258176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
I1031 17:46:45.181487  258176 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.182168  258176 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.182195  258176 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.182684  258176 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.182936  258176 main.go:141] libmachine: (functional-002320) Calling .DriverName
I1031 17:46:45.183189  258176 ssh_runner.go:195] Run: systemctl --version
I1031 17:46:45.183222  258176 main.go:141] libmachine: (functional-002320) Calling .GetSSHHostname
I1031 17:46:45.186729  258176 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.187095  258176 main.go:141] libmachine: (functional-002320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c4:37", ip: ""} in network mk-functional-002320: {Iface:virbr1 ExpiryTime:2023-10-31 18:43:54 +0000 UTC Type:0 Mac:52:54:00:62:c4:37 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-002320 Clientid:01:52:54:00:62:c4:37}
I1031 17:46:45.187136  258176 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined IP address 192.168.39.191 and MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.187376  258176 main.go:141] libmachine: (functional-002320) Calling .GetSSHPort
I1031 17:46:45.187597  258176 main.go:141] libmachine: (functional-002320) Calling .GetSSHKeyPath
I1031 17:46:45.187774  258176 main.go:141] libmachine: (functional-002320) Calling .GetSSHUsername
I1031 17:46:45.187928  258176 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/functional-002320/id_rsa Username:docker}
I1031 17:46:45.290777  258176 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 17:46:45.329081  258176 main.go:141] libmachine: Making call to close driver server
I1031 17:46:45.329098  258176 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:45.329469  258176 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:45.329504  258176 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:45.329483  258176 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
I1031 17:46:45.329520  258176 main.go:141] libmachine: Making call to close driver server
I1031 17:46:45.329532  258176 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:45.329765  258176 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:45.329784  258176 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:45.329801  258176 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
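The JSON listing above is a flat array of image objects, so it is straightforward to post-process. A minimal sketch in Python, using two sample entries copied from the listing (note that `size` is reported as a string, an easy thing to trip over):

```python
import json

# Two entries mirroring the `minikube image ls --format json` output above.
raw = """[
 {"id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
  "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.9-0"], "size": "294000000"},
 {"id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
  "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "744000"}
]"""

images = json.loads(raw)
# "size" is a string of bytes, so convert before doing arithmetic.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(tags)         # ['registry.k8s.io/etcd:3.5.9-0', 'registry.k8s.io/pause:3.9']
print(total_bytes)  # 294744000
```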
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-002320 image ls --format yaml --alsologtostderr:
- id: 5cc6d99be8ccd55b07460fa1c83812065484b0aee0d3e1d35dc1eb903806ddfd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-002320
size: "30"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-002320
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-002320 image ls --format yaml --alsologtostderr:
I1031 17:46:44.652419  258087 out.go:296] Setting OutFile to fd 1 ...
I1031 17:46:44.652670  258087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:44.652681  258087 out.go:309] Setting ErrFile to fd 2...
I1031 17:46:44.652686  258087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:44.652902  258087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
I1031 17:46:44.653517  258087 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:44.653640  258087 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:44.654068  258087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:44.654140  258087 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:44.669981  258087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
I1031 17:46:44.670540  258087 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:44.671293  258087 main.go:141] libmachine: Using API Version  1
I1031 17:46:44.671329  258087 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:44.671741  258087 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:44.671979  258087 main.go:141] libmachine: (functional-002320) Calling .GetState
I1031 17:46:44.674008  258087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:44.674089  258087 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:44.689410  258087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
I1031 17:46:44.689873  258087 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:44.690511  258087 main.go:141] libmachine: Using API Version  1
I1031 17:46:44.690550  258087 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:44.690902  258087 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:44.691091  258087 main.go:141] libmachine: (functional-002320) Calling .DriverName
I1031 17:46:44.691379  258087 ssh_runner.go:195] Run: systemctl --version
I1031 17:46:44.691410  258087 main.go:141] libmachine: (functional-002320) Calling .GetSSHHostname
I1031 17:46:44.694611  258087 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:44.694986  258087 main.go:141] libmachine: (functional-002320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c4:37", ip: ""} in network mk-functional-002320: {Iface:virbr1 ExpiryTime:2023-10-31 18:43:54 +0000 UTC Type:0 Mac:52:54:00:62:c4:37 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-002320 Clientid:01:52:54:00:62:c4:37}
I1031 17:46:44.695021  258087 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined IP address 192.168.39.191 and MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:44.695151  258087 main.go:141] libmachine: (functional-002320) Calling .GetSSHPort
I1031 17:46:44.695371  258087 main.go:141] libmachine: (functional-002320) Calling .GetSSHKeyPath
I1031 17:46:44.695579  258087 main.go:141] libmachine: (functional-002320) Calling .GetSSHUsername
I1031 17:46:44.695756  258087 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/functional-002320/id_rsa Username:docker}
I1031 17:46:44.810012  258087 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1031 17:46:44.865421  258087 main.go:141] libmachine: Making call to close driver server
I1031 17:46:44.865439  258087 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:44.865764  258087 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:44.865785  258087 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:44.865802  258087 main.go:141] libmachine: Making call to close driver server
I1031 17:46:44.865813  258087 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:44.865821  258087 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
I1031 17:46:44.866188  258087 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:44.866209  258087 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
I1031 17:46:44.866240  258087 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
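The YAML listing is the same data in a different encoding; with PyYAML installed (an assumption — it is not part of minikube or the test tooling) it can be processed the same way. A sketch using two sample entries from the listing above:

```python
import yaml  # PyYAML, assumed installed; not part of the test suite

# Two entries mirroring the `image ls --format yaml` output above.
doc = """
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
  repoDigests: []
  repoTags:
  - registry.k8s.io/etcd:3.5.9-0
  size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
  repoDigests: []
  repoTags:
  - registry.k8s.io/coredns/coredns:v1.10.1
  size: "53600000"
"""

images = yaml.safe_load(doc)
# The size field is quoted in the YAML, so it parses as a string, not an int.
biggest = max(images, key=lambda img: int(img["size"]))
print(biggest["repoTags"][0])  # registry.k8s.io/etcd:3.5.9-0
```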
TestFunctional/parallel/ImageCommands/ImageBuild (4.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh pgrep buildkitd: exit status 1 (277.318526ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image build -t localhost/my-image:functional-002320 testdata/build --alsologtostderr
E1031 17:46:45.190246  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image build -t localhost/my-image:functional-002320 testdata/build --alsologtostderr: (4.361579278s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-002320 image build -t localhost/my-image:functional-002320 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d80b9c9873ee
Removing intermediate container d80b9c9873ee
---> 70831945f0e6
Step 3/3 : ADD content.txt /
---> 3e19bad84b30
Successfully built 3e19bad84b30
Successfully tagged localhost/my-image:functional-002320
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-002320 image build -t localhost/my-image:functional-002320 testdata/build --alsologtostderr:
I1031 17:46:45.223709  258188 out.go:296] Setting OutFile to fd 1 ...
I1031 17:46:45.224041  258188 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.224054  258188 out.go:309] Setting ErrFile to fd 2...
I1031 17:46:45.224062  258188 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1031 17:46:45.224284  258188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
I1031 17:46:45.224932  258188 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.225636  258188 config.go:182] Loaded profile config "functional-002320": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1031 17:46:45.226085  258188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.226181  258188 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.241917  258188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
I1031 17:46:45.242528  258188 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.243159  258188 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.243193  258188 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.243544  258188 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.243725  258188 main.go:141] libmachine: (functional-002320) Calling .GetState
I1031 17:46:45.246305  258188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1031 17:46:45.246360  258188 main.go:141] libmachine: Launching plugin server for driver kvm2
I1031 17:46:45.264191  258188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
I1031 17:46:45.264731  258188 main.go:141] libmachine: () Calling .GetVersion
I1031 17:46:45.265400  258188 main.go:141] libmachine: Using API Version  1
I1031 17:46:45.265423  258188 main.go:141] libmachine: () Calling .SetConfigRaw
I1031 17:46:45.265883  258188 main.go:141] libmachine: () Calling .GetMachineName
I1031 17:46:45.266108  258188 main.go:141] libmachine: (functional-002320) Calling .DriverName
I1031 17:46:45.266335  258188 ssh_runner.go:195] Run: systemctl --version
I1031 17:46:45.266363  258188 main.go:141] libmachine: (functional-002320) Calling .GetSSHHostname
I1031 17:46:45.269647  258188 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.270056  258188 main.go:141] libmachine: (functional-002320) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:c4:37", ip: ""} in network mk-functional-002320: {Iface:virbr1 ExpiryTime:2023-10-31 18:43:54 +0000 UTC Type:0 Mac:52:54:00:62:c4:37 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-002320 Clientid:01:52:54:00:62:c4:37}
I1031 17:46:45.270092  258188 main.go:141] libmachine: (functional-002320) DBG | domain functional-002320 has defined IP address 192.168.39.191 and MAC address 52:54:00:62:c4:37 in network mk-functional-002320
I1031 17:46:45.270386  258188 main.go:141] libmachine: (functional-002320) Calling .GetSSHPort
I1031 17:46:45.270578  258188 main.go:141] libmachine: (functional-002320) Calling .GetSSHKeyPath
I1031 17:46:45.270750  258188 main.go:141] libmachine: (functional-002320) Calling .GetSSHUsername
I1031 17:46:45.270896  258188 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17530-243226/.minikube/machines/functional-002320/id_rsa Username:docker}
I1031 17:46:45.364372  258188 build_images.go:151] Building image from path: /tmp/build.4267748645.tar
I1031 17:46:45.364450  258188 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1031 17:46:45.375340  258188 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4267748645.tar
I1031 17:46:45.386517  258188 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4267748645.tar: stat -c "%s %y" /var/lib/minikube/build/build.4267748645.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4267748645.tar': No such file or directory
I1031 17:46:45.386564  258188 ssh_runner.go:362] scp /tmp/build.4267748645.tar --> /var/lib/minikube/build/build.4267748645.tar (3072 bytes)
I1031 17:46:45.425303  258188 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4267748645
I1031 17:46:45.440727  258188 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4267748645 -xf /var/lib/minikube/build/build.4267748645.tar
I1031 17:46:45.463763  258188 docker.go:347] Building image: /var/lib/minikube/build/build.4267748645
I1031 17:46:45.463853  258188 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-002320 /var/lib/minikube/build/build.4267748645
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1031 17:46:49.477315  258188 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-002320 /var/lib/minikube/build/build.4267748645: (4.013434318s)
I1031 17:46:49.477392  258188 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4267748645
I1031 17:46:49.492989  258188 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4267748645.tar
I1031 17:46:49.507706  258188 build_images.go:207] Built localhost/my-image:functional-002320 from /tmp/build.4267748645.tar
I1031 17:46:49.507750  258188 build_images.go:123] succeeded building to: functional-002320
I1031 17:46:49.507757  258188 build_images.go:124] failed building to: 
I1031 17:46:49.507787  258188 main.go:141] libmachine: Making call to close driver server
I1031 17:46:49.507805  258188 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:49.508139  258188 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:49.508158  258188 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:49.508169  258188 main.go:141] libmachine: Making call to close driver server
I1031 17:46:49.508180  258188 main.go:141] libmachine: (functional-002320) Calling .Close
I1031 17:46:49.508467  258188 main.go:141] libmachine: Successfully made call to close driver server
I1031 17:46:49.508497  258188 main.go:141] libmachine: Making call to close connection to plugin binary
I1031 17:46:49.508473  258188 main.go:141] libmachine: (functional-002320) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.90s)
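For reference, the three build steps in the stdout above correspond to a Dockerfile of roughly this shape — a reconstruction from the log output, not the actual contents of `testdata/build`:

```dockerfile
# Step 1/3: base image, pulled as gcr.io/k8s-minikube/busybox:latest
FROM gcr.io/k8s-minikube/busybox
# Step 2/3: a no-op command that still creates a layer
RUN true
# Step 3/3: copy content.txt from the build context into /
ADD content.txt /
```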
TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.909252476s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-002320
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr: (3.867408859s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.10s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr: (2.349006411s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.929314005s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-002320
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image load --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr: (5.392756989s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.62s)
TestFunctional/parallel/MountCmd/specific-port (2.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdspecific-port2840101384/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.023045ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdspecific-port2840101384/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh "sudo umount -f /mount-9p": exit status 1 (260.054729ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-002320 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdspecific-port2840101384/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)
TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T" /mount1: exit status 1 (375.363961ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-002320 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-002320 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3760890485/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.73s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service list -o json
functional_test.go:1493: Took "728.379574ms" to run "out/minikube-linux-amd64 -p functional-002320 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.73s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.191:31318
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.191:31318
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

TestFunctional/parallel/DockerEnv/bash (1.04s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-002320 docker-env) && out/minikube-linux-amd64 status -p functional-002320"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-002320 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image save gcr.io/google-containers/addon-resizer:functional-002320 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image save gcr.io/google-containers/addon-resizer:functional-002320 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.583696993s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.58s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image rm gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.10s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.300660492s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-002320
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-002320 image save --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-002320 image save --daemon gcr.io/google-containers/addon-resizer:functional-002320 --alsologtostderr: (1.336435537s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-002320
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-002320
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-002320
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-002320
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (335.19s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-958664 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-958664 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m11.546005957s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-958664 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-958664 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.729878902s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-958664 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-958664 addons enable gvisor: (4.3860929s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a46f403d-c02b-4c9b-b9e3-02548b6bd418] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.028021504s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-958664 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [a50d2d4d-7407-44e0-a995-2dbc19e2adf9] Pending
helpers_test.go:344: "nginx-gvisor" [a50d2d4d-7407-44e0-a995-2dbc19e2adf9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [a50d2d4d-7407-44e0-a995-2dbc19e2adf9] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.030145775s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-958664
E1031 18:30:23.267590  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-958664: (1m31.928887525s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-958664 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-958664 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (50.738863885s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a46f403d-c02b-4c9b-b9e3-02548b6bd418] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [a46f403d-c02b-4c9b-b9e3-02548b6bd418] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.029310804s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [a50d2d4d-7407-44e0-a995-2dbc19e2adf9] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.014598795s
helpers_test.go:175: Cleaning up "gvisor-958664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-958664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-958664: (1.443234885s)
--- PASS: TestGvisorAddon (335.19s)

TestImageBuild/serial/Setup (52.46s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-703765 --driver=kvm2 
E1031 17:48:07.110600  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-703765 --driver=kvm2 : (52.464332839s)
--- PASS: TestImageBuild/serial/Setup (52.46s)

TestImageBuild/serial/NormalBuild (2.27s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-703765
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-703765: (2.272651463s)
--- PASS: TestImageBuild/serial/NormalBuild (2.27s)

TestImageBuild/serial/BuildWithBuildArg (1.37s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-703765
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-703765: (1.367950865s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.37s)

TestImageBuild/serial/BuildWithDockerIgnore (0.41s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-703765
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.32s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-703765
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.32s)

TestIngressAddonLegacy/StartLegacyK8sCluster (90.46s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-243872 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-243872 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m30.460518854s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (90.46s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.54s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons enable ingress --alsologtostderr -v=5: (18.537657729s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.54s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.85s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-243872 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1031 17:50:23.270340  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-243872 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.848660123s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-243872 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-243872 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6d578d08-fda7-4b28-b483-8651a74d5556] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6d578d08-fda7-4b28-b483-8651a74d5556] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.014992556s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-243872 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.116
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons disable ingress-dns --alsologtostderr -v=1: (2.142011645s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-243872 addons disable ingress --alsologtostderr -v=1: (7.622820596s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.85s)

TestJSONOutput/start/Command (63.66s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-810956 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1031 17:50:50.951560  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 17:51:15.984463  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:15.989792  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.000072  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.020378  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.060825  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.141211  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.301708  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:16.622394  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:17.263579  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:18.544615  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:21.106295  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:26.226552  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 17:51:36.467767  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-810956 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m3.659050389s)
--- PASS: TestJSONOutput/start/Command (63.66s)
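The repeated `E1031 … cert_rotation.go:168` lines above follow klog's standard header layout (`Lmmdd hh:mm:ss.uuuuuu  pid file:line] msg`). As a minimal sketch, assuming that layout holds throughout the log, such lines can be grouped by the missing certificate path; the helper name `count_missing_files` is illustrative, not part of the minikube test suite.

```python
import re
from collections import Counter

# klog header: severity letter, MMDD, wall time, pid, source file:line, message.
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)

def count_missing_files(lines):
    """Count klog error lines per file path reported as missing."""
    counts = Counter()
    for line in lines:
        m = KLOG_RE.match(line)
        if not m or m.group('sev') != 'E':
            continue
        # messages in this log end with "open <path>: no such file or directory"
        path = re.search(r'open (\S+):', m.group('msg'))
        if path:
            counts[path.group(1)] += 1
    return counts

sample = [
    "E1031 17:51:16.000072  250411 cert_rotation.go:168] key failed with : "
    "open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/"
    "functional-002320/client.crt: no such file or directory",
]
print(count_missing_files(sample))
```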

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-810956 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-810956 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.43s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-810956 --output=json --user=testUser
E1031 17:51:56.948426  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-810956 --output=json --user=testUser: (7.428059144s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-680388 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-680388 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.217752ms)
-- stdout --
	{"specversion":"1.0","id":"fdc303d0-ca9a-409d-a38f-7edaefd3ca96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-680388] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5330da5-d353-4aed-a3f0-e907635717f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17530"}}
	{"specversion":"1.0","id":"c1b10362-73f8-4206-9b79-86bc973047c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ff02d663-8510-4443-8574-4a4bcaa343ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig"}}
	{"specversion":"1.0","id":"35c00646-6b3f-4cf2-a0d8-071b78b691e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube"}}
	{"specversion":"1.0","id":"71713cac-89d0-4713-b823-9d51a22a2b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7691b162-a239-4e03-a8d8-6b268ee5387c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8b7455e6-e79d-48a5-8602-e8c3d1de1551","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-680388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-680388
--- PASS: TestErrorJSONOutput (0.24s)
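The stdout captured above shows that with `--output=json`, minikube emits one CloudEvents-style JSON object per line, with the payload under `data`. A minimal sketch of pulling the human-readable message out of an `io.k8s.sigs.minikube.error` event with standard tools; the sample line is abridged from the output above, not re-captured:

```shell
# One CloudEvents-style object per line, abridged from the stdout block above.
line='{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver '\''fail'\'' is not supported on linux/amd64"}}'

# Keep only error events, then extract the data.message field.
printf '%s\n' "$line" \
  | grep '"type":"io.k8s.sigs.minikube.error"' \
  | sed 's/.*"message":"\([^"]*\)".*/\1/'
```

A JSON-aware tool would be more robust than `sed` for messages containing escaped quotes; this is only a sketch of the line-per-event shape.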

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (104.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-585449 --driver=kvm2 
E1031 17:52:37.908760  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-585449 --driver=kvm2 : (50.515539869s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-588431 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-588431 --driver=kvm2 : (51.174442741s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-585449
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-588431
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-588431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-588431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-588431: (1.001330904s)
helpers_test.go:175: Cleaning up "first-585449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-585449
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-585449: (1.035613083s)
--- PASS: TestMinikubeProfile (104.69s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-422707 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1031 17:53:59.829808  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-422707 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.488731151s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.49s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.44s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-422707 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-422707 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.44s)
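The verification above checks the host mount from inside the guest by grepping `mount` output for a 9p entry. A hypothetical helper mirroring that check; the sample mount line is illustrative (the msize value echoes the `--mount-msize 6543` flag above), not captured from this run:

```shell
# Succeed only if the given path appears in the supplied `mount` output
# as a 9p filesystem, as the ssh'd `mount | grep 9p` check above does.
is_9p_mounted() {
  # $1: full `mount` output, $2: expected mount point
  printf '%s\n' "$1" | grep -q "on $2 type 9p"
}

# Illustrative mount line, not taken from this run.
sample='192.168.39.1 on /minikube-host type 9p (rw,relatime,sync,dirsync,dfltgid=0,dfltuid=0,msize=6543)'
is_9p_mounted "$sample" /minikube-host && echo "9p mount present"
```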

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-444347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-444347 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.298812762s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-422707 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.43s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
TestMountStart/serial/Stop (2.11s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-444347
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-444347: (2.107893391s)
--- PASS: TestMountStart/serial/Stop (2.11s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-444347
E1031 17:55:13.565706  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.571057  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.581406  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.601739  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.642097  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.722419  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:13.882974  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:14.203231  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:14.844213  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 17:55:16.124891  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-444347: (24.727714623s)
--- PASS: TestMountStart/serial/RestartStopped (25.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-444347 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (165.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-441410
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-441410
E1031 18:10:23.270270  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-441410: (17.441357268s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-441410 --wait=true -v=8 --alsologtostderr
E1031 18:11:15.984051  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 18:11:36.612099  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-441410 --wait=true -v=8 --alsologtostderr: (2m27.833927262s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-441410
--- PASS: TestMultiNode/serial/RestartKeepsNodes (165.41s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.83s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 node delete m03: (1.250829126s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.83s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-441410 stop: (25.462854731s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status: exit status 7 (106.182579ms)
-- stdout --
	multinode-441410
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-441410-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr: exit status 7 (105.900847ms)
-- stdout --
	multinode-441410
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-441410-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1031 18:13:31.910849  268153 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:13:31.911120  268153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:13:31.911130  268153 out.go:309] Setting ErrFile to fd 2...
	I1031 18:13:31.911135  268153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:13:31.911324  268153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17530-243226/.minikube/bin
	I1031 18:13:31.911491  268153 out.go:303] Setting JSON to false
	I1031 18:13:31.911539  268153 mustload.go:65] Loading cluster: multinode-441410
	I1031 18:13:31.911668  268153 notify.go:220] Checking for updates...
	I1031 18:13:31.912103  268153 config.go:182] Loaded profile config "multinode-441410": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1031 18:13:31.912125  268153 status.go:255] checking status of multinode-441410 ...
	I1031 18:13:31.912608  268153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:13:31.912732  268153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:13:31.933820  268153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I1031 18:13:31.934392  268153 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:13:31.934978  268153 main.go:141] libmachine: Using API Version  1
	I1031 18:13:31.935005  268153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:13:31.935413  268153 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:13:31.935650  268153 main.go:141] libmachine: (multinode-441410) Calling .GetState
	I1031 18:13:31.937120  268153 status.go:330] multinode-441410 host status = "Stopped" (err=<nil>)
	I1031 18:13:31.937137  268153 status.go:343] host is not running, skipping remaining checks
	I1031 18:13:31.937145  268153 status.go:257] multinode-441410 status: &{Name:multinode-441410 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:13:31.937180  268153 status.go:255] checking status of multinode-441410-m02 ...
	I1031 18:13:31.937618  268153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:13:31.937669  268153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1031 18:13:31.952747  268153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I1031 18:13:31.953257  268153 main.go:141] libmachine: () Calling .GetVersion
	I1031 18:13:31.953804  268153 main.go:141] libmachine: Using API Version  1
	I1031 18:13:31.953834  268153 main.go:141] libmachine: () Calling .SetConfigRaw
	I1031 18:13:31.954228  268153 main.go:141] libmachine: () Calling .GetMachineName
	I1031 18:13:31.954400  268153 main.go:141] libmachine: (multinode-441410-m02) Calling .GetState
	I1031 18:13:31.956210  268153 status.go:330] multinode-441410-m02 host status = "Stopped" (err=<nil>)
	I1031 18:13:31.956226  268153 status.go:343] host is not running, skipping remaining checks
	I1031 18:13:31.956234  268153 status.go:257] multinode-441410-m02 status: &{Name:multinode-441410-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.68s)
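As the non-zero exits above show, `minikube status` reports stopped hosts through its exit code (status 7 here) rather than failing outright, so callers must branch on the code instead of treating any non-zero exit as an error. A sketch of that pattern; `minikube_status` is a stub standing in for `out/minikube-linux-amd64 -p multinode-441410 status`:

```shell
# Stub standing in for `out/minikube-linux-amd64 -p <profile> status`;
# the real command exited with status 7 in the run above.
minikube_status() { echo "host: Stopped"; return 7; }

# Capture the exit code without aborting under `set -e`.
if minikube_status; then rc=0; else rc=$?; fi
case "$rc" in
  0) echo "cluster running" ;;
  7) echo "cluster stopped" ;;   # matches the exit status 7 seen above
  *) echo "unexpected exit code: $rc" ;;
esac
```

Treating exit code 7 as a normal "stopped" state is exactly what lets the test above assert on a stopped cluster without failing.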

                                                
                                    
TestMultiNode/serial/RestartMultiNode (104.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-441410 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1031 18:15:13.566024  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-441410 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m43.45250379s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-441410 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (104.03s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (52.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-441410
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-441410-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-441410-m02 --driver=kvm2 : exit status 14 (85.646728ms)
-- stdout --
	* [multinode-441410-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-441410-m02' is duplicated with machine name 'multinode-441410-m02' in profile 'multinode-441410'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-441410-m03 --driver=kvm2 
E1031 18:15:23.270883  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-441410-m03 --driver=kvm2 : (51.268081757s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-441410
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-441410: exit status 80 (243.220155ms)
-- stdout --
	* Adding node m03 to cluster multinode-441410
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-441410-m03 already exists in multinode-441410-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-441410-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-441410-m03: (1.022143095s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.68s)
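The MK_USAGE failure above exercises the rule that a new profile name must not collide with an existing profile or machine name (here the auto-generated `-m02` node name of `multinode-441410`). A hypothetical sketch of that uniqueness check; `profile_exists` and its argument convention are illustrative, not minikube's implementation:

```shell
# Hypothetical check: does the proposed name ($1) collide with any of the
# existing profile/machine names passed as the remaining arguments?
profile_exists() {
  new="$1"; shift
  for existing in "$@"; do
    [ "$new" = "$existing" ] && return 0
  done
  return 1
}

# The collision the test provokes: the -m02 machine name is already taken.
profile_exists multinode-441410-m02 multinode-441410 multinode-441410-m02 \
  && echo "X Exiting due to MK_USAGE: Profile name should be unique"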

                                                
                                    
TestPreload (181.3s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-844082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1031 18:16:15.983937  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-844082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m29.549192876s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-844082 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-844082 image pull gcr.io/k8s-minikube/busybox: (2.116475011s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-844082
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-844082: (13.125021938s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-844082 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1031 18:18:26.313250  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-844082 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m15.223232247s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-844082 image list
helpers_test.go:175: Cleaning up "test-preload-844082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-844082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-844082: (1.060899398s)
--- PASS: TestPreload (181.30s)

TestScheduledStopUnix (124.21s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-482626 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-482626 --memory=2048 --driver=kvm2 : (52.275825503s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-482626 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-482626 -n scheduled-stop-482626
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-482626 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-482626 --cancel-scheduled
E1031 18:20:13.565191  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:20:23.270194  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-482626 -n scheduled-stop-482626
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-482626
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-482626 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1031 18:21:15.985003  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-482626
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-482626: exit status 7 (82.879931ms)
-- stdout --
	scheduled-stop-482626
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-482626 -n scheduled-stop-482626
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-482626 -n scheduled-stop-482626: exit status 7 (87.451742ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-482626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-482626
--- PASS: TestScheduledStopUnix (124.21s)
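The scheduled-stop test above polls `minikube status` until the host reports `Stopped` and the command exits with status 7. A minimal sketch of parsing that plain-text status output into a dict, the way a wrapper script might; the field names (`host`, `kubelet`, ...) are copied from the log, everything else is illustrative and not part of minikube itself:

```python
def parse_status(text: str) -> dict:
    """Turn the 'key: value' lines of `minikube status` output into a dict."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Sample copied from the stdout block in the log above.
sample = """\
scheduled-stop-482626
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
"""

status = parse_status(sample)
assert status["host"] == "Stopped"        # the scheduled stop has fired
assert status["type"] == "Control Plane"
```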

TestSkaffold (142.6s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3613983618 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-897077 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-897077 --memory=2600 --driver=kvm2 : (49.736925244s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3613983618 run --minikube-profile skaffold-897077 --kube-context skaffold-897077 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3613983618 run --minikube-profile skaffold-897077 --kube-context skaffold-897077 --status-check=true --port-forward=false --interactive=false: (1m18.769752064s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-567545d556-xm4kz" [71371224-f5bf-4df5-a873-dfa3a3ba2ff5] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.022644274s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65984bd7bd-4g447" [47c8ab4f-8ea4-4993-a2d0-c78769d94d94] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011337353s
helpers_test.go:175: Cleaning up "skaffold-897077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-897077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-897077: (1.173945523s)
--- PASS: TestSkaffold (142.60s)

TestRunningBinaryUpgrade (199.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.650163293.exe start -p running-upgrade-724969 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.650163293.exe start -p running-upgrade-724969 --memory=2200 --vm-driver=kvm2 : (1m58.417039465s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-724969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-724969 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m17.819163935s)
helpers_test.go:175: Cleaning up "running-upgrade-724969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-724969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-724969: (1.322180633s)
--- PASS: TestRunningBinaryUpgrade (199.16s)

TestKubernetesUpgrade (218.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m14.661817375s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-731077
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-731077: (12.378259341s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-731077 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-731077 status --format={{.Host}}: exit status 7 (109.406123ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1031 18:25:13.565791  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (47.327924191s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-731077 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (115.982338ms)
-- stdout --
	* [kubernetes-upgrade-731077] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-731077
	    minikube start -p kubernetes-upgrade-731077 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7310772 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-731077 --kubernetes-version=v1.28.3
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1031 18:26:15.983731  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-731077 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (1m23.107032474s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-731077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-731077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-731077: (1.123155451s)
--- PASS: TestKubernetesUpgrade (218.89s)
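The `K8S_DOWNGRADE_UNSUPPORTED` failure above (exit status 106) is a guard against moving an existing cluster backwards from v1.28.3 to v1.16.0. A sketch of that version comparison under the obvious assumption (semantic ordering of `vMAJOR.MINOR.PATCH`); this is a hypothetical helper, not minikube's actual Go implementation:

```python
def parse_version(v: str) -> tuple:
    """'v1.28.3' -> (1, 28, 3), so tuples compare in version order."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(current: str, requested: str) -> bool:
    """True when the requested Kubernetes version is older than the cluster's."""
    return parse_version(requested) < parse_version(current)

# The combination rejected in the log above:
assert downgrade_requested("v1.28.3", "v1.16.0") is True
# The upgrade direction (v1.16.0 -> v1.28.3 earlier in the test) is allowed:
assert downgrade_requested("v1.16.0", "v1.28.3") is False
```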

TestStoppedBinaryUpgrade/Setup (1.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.59s)

TestStoppedBinaryUpgrade/Upgrade (207.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.840515196.exe start -p stopped-upgrade-886228 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.840515196.exe start -p stopped-upgrade-886228 --memory=2200 --vm-driver=kvm2 : exit status 70 (1.884774804s)
-- stdout --
	* [stopped-upgrade-886228] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig161704110
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...
-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s* 
	X Failed to cache ISO: rename /home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/minikube-v1.6.0.iso.download /home/jenkins/minikube-integration/17530-243226/.minikube/cache/iso/minikube-v1.6.0.iso: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.840515196.exe start -p stopped-upgrade-886228 --memory=2200 --vm-driver=kvm2 
E1031 18:24:19.031497  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.840515196.exe start -p stopped-upgrade-886228 --memory=2200 --vm-driver=kvm2 : (1m51.618082008s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.840515196.exe -p stopped-upgrade-886228 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.840515196.exe -p stopped-upgrade-886228 stop: (13.087949297s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-886228 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-886228 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.898122539s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (207.53s)

TestPause/serial/Start (75.4s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-584304 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-584304 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m15.397938329s)
--- PASS: TestPause/serial/Start (75.40s)

TestPause/serial/SecondStartNoReconfiguration (48.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-584304 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-584304 --alsologtostderr -v=1 --driver=kvm2 : (48.101570123s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.13s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (95.828253ms)
-- stdout --
	* [NoKubernetes-954960] minikube v1.32.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17530-243226/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17530-243226/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
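The `MK_USAGE` error above (exit status 14) comes from a mutually exclusive flag pair: `--no-kubernetes` combined with an explicit `--kubernetes-version`. A minimal sketch of that kind of validation; the function and its signature are illustrative (minikube's real flag parsing lives in Go), only the error text is taken from the log:

```python
def validate_flags(no_kubernetes: bool, kubernetes_version: "str | None") -> "str | None":
    """Return an error message for an invalid flag combination, else None."""
    if no_kubernetes and kubernetes_version is not None:
        return "cannot specify --kubernetes-version with --no-kubernetes"
    return None

# The rejected invocation from the log: --no-kubernetes --kubernetes-version=1.20
err = validate_flags(no_kubernetes=True, kubernetes_version="1.20")
assert err == "cannot specify --kubernetes-version with --no-kubernetes"

# Either flag on its own is fine.
assert validate_flags(no_kubernetes=True, kubernetes_version=None) is None
assert validate_flags(no_kubernetes=False, kubernetes_version="1.20") is None
```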

TestNoKubernetes/serial/StartWithK8s (53.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954960 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954960 --driver=kvm2 : (53.166244566s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-954960 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-886228
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-886228: (1.483139602s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-584304 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-584304 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-584304 --output=json --layout=cluster: exit status 2 (314.0779ms)
-- stdout --
	{"Name":"pause-584304","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-584304","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
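The `status --output=json --layout=cluster` payload above encodes component health as HTTP-style status codes (200 OK, 418 Paused, 405 Stopped). A sketch of extracting per-component state the way a wrapper script might; the JSON fields are abridged from the log, the traversal itself is illustrative:

```python
import json

# Abridged from the -- stdout -- block above.
payload = json.loads("""
{"Name": "pause-584304", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-584304", "StatusCode": 200, "StatusName": "OK",
   "Components": {
     "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
     "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

# Flatten node components into {name: status} for a quick health check.
components = {
    name: comp["StatusName"]
    for node in payload["Nodes"]
    for name, comp in node["Components"].items()
}

assert payload["StatusName"] == "Paused"   # cluster-level state
assert components == {"apiserver": "Paused", "kubelet": "Stopped"}
```

The non-zero exit (status 2) in the log is consistent with the paused state: `status` signals anything other than fully running via its exit code.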

TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-584304 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-584304 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (1.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-584304 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-584304 --alsologtostderr -v=5: (1.119571783s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

TestPause/serial/VerifyDeletedResources (0.3s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.30s)

TestNoKubernetes/serial/StartWithStopK8s (69.38s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --driver=kvm2 
E1031 18:28:16.612934  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --driver=kvm2 : (1m8.016474325s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-954960 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-954960 status -o json: exit status 2 (281.359547ms)
-- stdout --
	{"Name":"NoKubernetes-954960","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-954960
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-954960: (1.082294463s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.38s)
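After restarting with `--no-kubernetes`, `status -o json` above reports the host `Running` but the kubelet `Stopped`, and the command exits non-zero. A short sketch of checking for that state from the JSON; the field names are exactly as printed in the log, the check itself is illustrative:

```python
import json

# Copied from the -- stdout -- block above.
status = json.loads(
    '{"Name":"NoKubernetes-954960","Host":"Running",'
    '"Kubelet":"Stopped","APIServer":"Stopped",'
    '"Kubeconfig":"Configured","Worker":false}'
)

# The state this test expects: VM up, Kubernetes components down.
host_up_no_k8s = status["Host"] == "Running" and status["Kubelet"] == "Stopped"
assert host_up_no_k8s  # the degraded state the non-zero exit in the log reflects
```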

TestNoKubernetes/serial/Start (68.83s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --driver=kvm2 
E1031 18:29:10.097776  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954960 --no-kubernetes --driver=kvm2 : (1m8.829093625s)
--- PASS: TestNoKubernetes/serial/Start (68.83s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-954960 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-954960 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.246992ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
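The check above leans on `systemctl is-active` exit codes: 0 means the unit is active, and a non-zero code (the `Process exited with status 3` seen via ssh in the log) means it is not. A sketch of interpreting that exit code; the helper is hypothetical:

```python
def kubelet_running(ssh_exit_code: int) -> bool:
    """Interpret `systemctl is-active kubelet` run over ssh.

    systemctl exits 0 when the unit is active and non-zero otherwise;
    status 3 is what the log above shows for an inactive kubelet.
    """
    return ssh_exit_code == 0

assert kubelet_running(3) is False   # the expected state in this test
assert kubelet_running(0) is True    # would indicate kubelet is still up
```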

TestNoKubernetes/serial/ProfileList (1.37s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

TestNoKubernetes/serial/Stop (3.13s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-954960
E1031 18:30:13.566193  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-954960: (3.13100647s)
--- PASS: TestNoKubernetes/serial/Stop (3.13s)

TestNoKubernetes/serial/StartNoArgs (41.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-954960 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-954960 --driver=kvm2 : (41.370740424s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.37s)

TestNetworkPlugins/group/auto/Start (113.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m53.529672607s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-954960 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-954960 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.298757ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/kindnet/Start (107.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m47.538563424s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (107.54s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zch4j" [3f6b3aad-2a1c-469a-a196-8855159e4fba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zch4j" [3f6b3aad-2a1c-469a-a196-8855159e4fba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.014247399s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.39s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (102.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m42.547295562s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.55s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kq4fq" [7027713a-2ace-4a8a-a281-ba6e416346b0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024101569s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bkpqc" [19b3cbae-cf92-4b6f-92dc-7ab62a770ee1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bkpqc" [19b3cbae-cf92-4b6f-92dc-7ab62a770ee1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.014814893s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.40s)

TestNetworkPlugins/group/custom-flannel/Start (98.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m38.929924583s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.93s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/false/Start (94.69s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1031 18:33:29.134977  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:33:56.820138  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m34.692443277s)
--- PASS: TestNetworkPlugins/group/false/Start (94.69s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xncs5" [97858e73-1b5c-42d7-8b94-e4a326cc7bc7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.035403692s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (13.62s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4mn4n" [a22cc809-7793-4e0e-ba0e-9f916cdc8044] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4mn4n" [a22cc809-7793-4e0e-ba0e-9f916cdc8044] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.01895631s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.62s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-88dtt" [02573c6c-239e-4ebf-934d-aa7d2a84ca29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-88dtt" [02573c6c-239e-4ebf-934d-aa7d2a84ca29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.015177536s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.48s)

TestNetworkPlugins/group/enable-default-cni/Start (75.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m15.805483027s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.81s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zdfbm" [72d7899a-0c7f-4dce-9548-5c8e4471d6c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 18:34:59.912657  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:34:59.918245  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:34:59.928610  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:34:59.948978  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:34:59.989140  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:35:00.070250  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:35:00.230819  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:35:00.551041  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:35:01.191884  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:35:02.472774  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zdfbm" [72d7899a-0c7f-4dce-9548-5c8e4471d6c0] Running
E1031 18:35:06.313496  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.01739029s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.46s)

TestNetworkPlugins/group/flannel/Start (88.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E1031 18:35:05.033250  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m28.732855469s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.73s)

TestNetworkPlugins/group/bridge/Start (110.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1031 18:35:10.153464  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m50.860447062s)
--- PASS: TestNetworkPlugins/group/bridge/Start (110.86s)

TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (117.65s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E1031 18:35:40.874851  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-589414 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m57.647958218s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (117.65s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q7hk4" [900b3cae-0853-4b2f-9563-d9a11ba7d2a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q7hk4" [900b3cae-0853-4b2f-9563-d9a11ba7d2a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.015416372s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (164.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-976044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-976044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m44.933575041s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.93s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9cv2s" [91afa740-8a74-4e0a-bbc9-d9cd8697e2aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019326865s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (16.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jnvml" [5c2e3cb2-b4b8-4847-890d-fc868c8567fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jnvml" [5c2e3cb2-b4b8-4847-890d-fc868c8567fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.016346206s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.41s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zz244" [f74fb027-453b-476b-928f-1bdeec841176] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zz244" [f74fb027-453b-476b-928f-1bdeec841176] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.01584433s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.42s)

TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

TestNetworkPlugins/group/bridge/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (89.45s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-799191 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:37:17.672792  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
E1031 18:37:22.793990  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-799191 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (1m29.445406924s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.45s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-589414 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-589414 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8vwfz" [dadde9a8-04a4-41e3-bb70-0e84109a7477] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8vwfz" [dadde9a8-04a4-41e3-bb70-0e84109a7477] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.014329032s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.52s)

TestStartStop/group/embed-certs/serial/FirstStart (98.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-189930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:37:33.034243  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-189930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (1m38.212659567s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (98.21s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-589414 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-589414 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235459 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:38:06.497999  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
E1031 18:38:26.978208  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
E1031 18:38:29.135536  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:38:34.475634  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235459 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m31.669533722s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.67s)

TestStartStop/group/no-preload/serial/DeployApp (11.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-799191 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9c9efb1-2468-48d9-8e18-218d8b547253] Pending
helpers_test.go:344: "busybox" [f9c9efb1-2468-48d9-8e18-218d8b547253] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9c9efb1-2468-48d9-8e18-218d8b547253] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.034659936s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-799191 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.40s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-799191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-799191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.300104328s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-799191 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/no-preload/serial/Stop (13.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-799191 --alsologtostderr -v=3
E1031 18:39:07.938518  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-799191 --alsologtostderr -v=3: (13.162331407s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-189930 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6b667ce4-e843-467b-b8fa-85322f5e2077] Pending
helpers_test.go:344: "busybox" [6b667ce4-e843-467b-b8fa-85322f5e2077] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6b667ce4-e843-467b-b8fa-85322f5e2077] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.026965598s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-189930 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.54s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-976044 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22843cea-2d72-48dd-a861-9577617349b5] Pending
helpers_test.go:344: "busybox" [22843cea-2d72-48dd-a861-9577617349b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22843cea-2d72-48dd-a861-9577617349b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.038678032s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-976044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.55s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799191 -n no-preload-799191
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799191 -n no-preload-799191: exit status 7 (115.74195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-799191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (337.20s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-799191 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-799191 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m36.715946712s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-799191 -n no-preload-799191
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.20s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-189930 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-189930 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.304998007s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-189930 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-976044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-976044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053525809s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-976044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/embed-certs/serial/Stop (13.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-189930 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-189930 --alsologtostderr -v=3: (13.188986719s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.19s)

TestStartStop/group/old-k8s-version/serial/Stop (13.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-976044 --alsologtostderr -v=3
E1031 18:39:25.779082  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:25.784451  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:25.794796  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:25.815823  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:25.856251  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:25.936583  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:26.097421  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:26.417907  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:27.058945  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:28.339997  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:30.901085  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-976044 --alsologtostderr -v=3: (13.289619231s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.40s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-235459 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5030c185-a121-449e-9241-7ae5e6ae57d9] Pending
helpers_test.go:344: "busybox" [5030c185-a121-449e-9241-7ae5e6ae57d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 18:39:34.097649  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.102980  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.113334  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.133746  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.174423  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.254864  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.415487  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:34.736588  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5030c185-a121-449e-9241-7ae5e6ae57d9] Running
E1031 18:39:39.218455  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.023835054s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-235459 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189930 -n embed-certs-189930
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189930 -n embed-certs-189930: exit status 7 (97.62153ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-189930 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (332.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-189930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:39:35.377265  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-189930 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m32.028163676s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189930 -n embed-certs-189930
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (332.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-976044 -n old-k8s-version-976044
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-976044 -n old-k8s-version-976044: exit status 7 (97.98269ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-976044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (459.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-976044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1031 18:39:36.021363  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:36.657920  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-976044 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m39.074843371s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-976044 -n old-k8s-version-976044
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (459.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-235459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-235459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.033279493s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-235459 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-235459 --alsologtostderr -v=3
E1031 18:39:44.339246  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:46.262404  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:39:54.579774  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:39:56.396637  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-235459 --alsologtostderr -v=3: (13.133272519s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459: exit status 7 (97.43789ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-235459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235459 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:39:58.782253  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:58.787532  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:58.797815  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:58.818163  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:58.858315  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:58.938939  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:59.099446  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:59.420001  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:39:59.912830  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:40:00.061038  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:01.342145  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:03.902681  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:06.742606  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:40:09.022949  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:13.565929  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
E1031 18:40:15.060543  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:40:19.263382  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:23.267341  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
E1031 18:40:27.597485  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:40:29.858756  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
E1031 18:40:39.744162  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:40:47.703837  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:40:53.480777  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.486118  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.496408  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.516774  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.557148  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.637586  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:53.798181  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:54.119396  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:54.760420  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:56.021279  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:40:56.041459  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:58.602300  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:40:59.032384  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 18:41:03.723154  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:41:13.963581  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:41:15.984143  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
E1031 18:41:20.704670  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:41:33.694794  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:33.700195  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:33.710664  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:33.731070  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:33.771488  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:33.851923  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:34.012610  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:34.333451  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:34.444397  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:41:34.974439  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:36.255570  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:38.816077  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:43.937070  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:54.177747  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:41:59.111331  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.116627  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.126949  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.147422  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.187744  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.268081  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.429086  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:41:59.749724  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:00.389975  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:01.670193  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:04.230698  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:09.351037  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:09.624414  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:42:12.551347  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
E1031 18:42:14.658881  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:42:15.404839  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:42:17.941963  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:42:19.591472  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:28.009352  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.014726  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.025110  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.045541  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.085953  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.166441  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.326877  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:28.647900  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:29.288428  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:30.569351  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:33.129584  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:38.250300  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:40.071725  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:42:40.237559  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
E1031 18:42:42.625019  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:42:46.015119  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
E1031 18:42:48.491210  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:42:55.619161  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:43:08.972420  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:43:13.699430  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kindnet-589414/client.crt: no such file or directory
E1031 18:43:21.032425  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:43:29.135264  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:43:37.325833  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
E1031 18:43:49.933627  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
E1031 18:44:17.539755  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:44:25.778897  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:44:34.097655  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
E1031 18:44:42.952828  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235459 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (5m54.592449388s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmpfr" [3752cc5f-4c5b-49ab-a3ea-e2bde7e6b8df] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1031 18:44:52.180908  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/skaffold-897077/client.crt: no such file or directory
E1031 18:44:53.465036  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/calico-589414/client.crt: no such file or directory
E1031 18:44:56.613209  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/ingress-addon-legacy-243872/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmpfr" [3752cc5f-4c5b-49ab-a3ea-e2bde7e6b8df] Running
E1031 18:44:58.782295  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
E1031 18:44:59.912463  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/gvisor-958664/client.crt: no such file or directory
E1031 18:45:01.783046  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/custom-flannel-589414/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.021672537s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmpfr" [3752cc5f-4c5b-49ab-a3ea-e2bde7e6b8df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022903649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-799191 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xtlbg" [221c3441-faf9-47f9-ac36-886dd78182b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xtlbg" [221c3441-faf9-47f9-ac36-886dd78182b2] Running
E1031 18:45:23.266864  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/addons-164380/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.025138081s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.03s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-799191 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (3.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-799191 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799191 -n no-preload-799191
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799191 -n no-preload-799191: exit status 2 (318.534625ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799191 -n no-preload-799191
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799191 -n no-preload-799191: exit status 2 (294.495506ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-799191 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-799191 -n no-preload-799191
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-799191 -n no-preload-799191
E1031 18:45:11.854547  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)

TestStartStop/group/newest-cni/serial/FirstStart (74.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-556434 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-556434 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m14.111255143s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.11s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xtlbg" [221c3441-faf9-47f9-ac36-886dd78182b2] Running
E1031 18:45:26.465265  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/false-589414/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.066128961s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-189930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-189930 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-189930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189930 -n embed-certs-189930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189930 -n embed-certs-189930: exit status 2 (309.640809ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189930 -n embed-certs-189930
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189930 -n embed-certs-189930: exit status 2 (395.212526ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-189930 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189930 -n embed-certs-189930
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189930 -n embed-certs-189930
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhhf2" [80fdf7d1-489a-4645-a72f-a8e2e78c7db4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1031 18:45:53.481334  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhhf2" [80fdf7d1-489a-4645-a72f-a8e2e78c7db4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.030925682s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (23.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qhhf2" [80fdf7d1-489a-4645-a72f-a8e2e78c7db4] Running
E1031 18:46:15.983260  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/functional-002320/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013992108s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-235459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-235459 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-235459 --alsologtostderr -v=1
E1031 18:46:21.166360  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/enable-default-cni-589414/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459: exit status 2 (323.180884ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459: exit status 2 (292.190976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-235459 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235459 -n default-k8s-diff-port-235459
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-556434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-556434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006995046s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-556434 --alsologtostderr -v=3
E1031 18:46:33.695080  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-556434 --alsologtostderr -v=3: (13.13896955s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-556434 -n newest-cni-556434
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-556434 -n newest-cni-556434: exit status 7 (87.321941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-556434 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-556434 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1031 18:46:59.111012  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/bridge-589414/client.crt: no such file or directory
E1031 18:47:01.380511  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/flannel-589414/client.crt: no such file or directory
E1031 18:47:12.550926  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/auto-589414/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-556434 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (46.956884845s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-556434 -n newest-cni-556434
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2t7z6" [ecc68842-0ada-4ec4-b84e-03463aa49429] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017835587s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2t7z6" [ecc68842-0ada-4ec4-b84e-03463aa49429] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016187572s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-976044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-976044 --alsologtostderr -v=1
E1031 18:47:28.009585  250411 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17530-243226/.minikube/profiles/kubenet-589414/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-976044 -n old-k8s-version-976044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-976044 -n old-k8s-version-976044: exit status 2 (323.091501ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-976044 -n old-k8s-version-976044
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-976044 -n old-k8s-version-976044: exit status 2 (296.589632ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-976044 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-976044 -n old-k8s-version-976044
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-976044 -n old-k8s-version-976044
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-556434 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-556434 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-556434 -n newest-cni-556434
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-556434 -n newest-cni-556434: exit status 2 (648.923704ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-556434 -n newest-cni-556434
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-556434 -n newest-cni-556434: exit status 2 (291.161568ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-556434 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-556434 -n newest-cni-556434
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-556434 -n newest-cni-556434
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

Test skip (31/321)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-589414 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-589414

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-589414

>>> host: /etc/nsswitch.conf:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/hosts:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/resolv.conf:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-589414

>>> host: crictl pods:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: crictl containers:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> k8s: describe netcat deployment:
error: context "cilium-589414" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-589414" does not exist

>>> k8s: netcat logs:
error: context "cilium-589414" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-589414" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-589414" does not exist

>>> k8s: coredns logs:
error: context "cilium-589414" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-589414" does not exist

>>> k8s: api server logs:
error: context "cilium-589414" does not exist

>>> host: /etc/cni:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: ip a s:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: ip r s:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: iptables-save:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: iptables table nat:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-589414

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-589414

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-589414" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-589414" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-589414

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-589414

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-589414" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-589414" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-589414" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-589414" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-589414" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: kubelet daemon config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> k8s: kubelet logs:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-589414

>>> host: docker daemon status:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: docker daemon config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: docker system info:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: cri-docker daemon status:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: cri-docker daemon config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: cri-dockerd version:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: containerd daemon status:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: containerd daemon config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: containerd config dump:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: crio daemon status:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: crio daemon config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: /etc/crio:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"

>>> host: crio config:
* Profile "cilium-589414" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-589414"
----------------------- debugLogs end: cilium-589414 [took: 3.886365671s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-589414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-589414
--- SKIP: TestNetworkPlugins/group/cilium (4.05s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-949883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-949883
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)