Test Report: Hyperkit_macOS 19046

                    
fb148a11d8032b35b0d9cd6893af3c5921ed4428:2024-06-10:34835

Tests failed (15/327)

TestAddons/Setup (76.48s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-992000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-992000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (1m16.468617101s)
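To reproduce locally, the failing invocation can be replayed as-is (a sketch, copied from the runner output above; it assumes a minikube binary built from commit fb148a11 at out/minikube-darwin-amd64 and a working hyperkit install on the host):

	out/minikube-darwin-amd64 start -p addons-992000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit --addons=ingress --addons=ingress-dns --addons=helm-tiller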

-- stdout --
	* [addons-992000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "addons-992000" primary control-plane node in "addons-992000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0610 18:52:40.201541    6599 out.go:291] Setting OutFile to fd 1 ...
	I0610 18:52:40.202253    6599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:40.202260    6599 out.go:304] Setting ErrFile to fd 2...
	I0610 18:52:40.202264    6599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:40.202809    6599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 18:52:40.204355    6599 out.go:298] Setting JSON to false
	I0610 18:52:40.226543    6599 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22916,"bootTime":1718047844,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 18:52:40.226643    6599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 18:52:40.248627    6599 out.go:177] * [addons-992000] minikube v1.33.1 on Darwin 14.4.1
	I0610 18:52:40.290734    6599 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 18:52:40.290794    6599 notify.go:220] Checking for updates...
	I0610 18:52:40.333477    6599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 18:52:40.354657    6599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 18:52:40.375577    6599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 18:52:40.396502    6599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 18:52:40.417705    6599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 18:52:40.438474    6599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 18:52:40.467497    6599 out.go:177] * Using the hyperkit driver based on user configuration
	I0610 18:52:40.509236    6599 start.go:297] selected driver: hyperkit
	I0610 18:52:40.509263    6599 start.go:901] validating driver "hyperkit" against <nil>
	I0610 18:52:40.509284    6599 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 18:52:40.513639    6599 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:40.513784    6599 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 18:52:40.522605    6599 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 18:52:40.526677    6599 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 18:52:40.526713    6599 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 18:52:40.526764    6599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 18:52:40.526979    6599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 18:52:40.527033    6599 cni.go:84] Creating CNI manager for ""
	I0610 18:52:40.527068    6599 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 18:52:40.527076    6599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 18:52:40.527192    6599 start.go:340] cluster config:
	{Name:addons-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 18:52:40.527280    6599 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:40.548639    6599 out.go:177] * Starting "addons-992000" primary control-plane node in "addons-992000" cluster
	I0610 18:52:40.569655    6599 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 18:52:40.569724    6599 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 18:52:40.569765    6599 cache.go:56] Caching tarball of preloaded images
	I0610 18:52:40.570012    6599 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 18:52:40.570034    6599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 18:52:40.570535    6599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/addons-992000/config.json ...
	I0610 18:52:40.570582    6599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/addons-992000/config.json: {Name:mk5bfb42a6bae624e6af132a397072fa37536eb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 18:52:40.571912    6599 start.go:360] acquireMachinesLock for addons-992000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 18:52:40.572153    6599 start.go:364] duration metric: took 210.585µs to acquireMachinesLock for "addons-992000"
	I0610 18:52:40.572226    6599 start.go:93] Provisioning new machine with config: &{Name:addons-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 18:52:40.572297    6599 start.go:125] createHost starting for "" (driver="hyperkit")
	I0610 18:52:40.593462    6599 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 18:52:40.593805    6599 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 18:52:40.593861    6599 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 18:52:40.603836    6599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50370
	I0610 18:52:40.604186    6599 main.go:141] libmachine: () Calling .GetVersion
	I0610 18:52:40.604591    6599 main.go:141] libmachine: Using API Version  1
	I0610 18:52:40.604600    6599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 18:52:40.604819    6599 main.go:141] libmachine: () Calling .GetMachineName
	I0610 18:52:40.604937    6599 main.go:141] libmachine: (addons-992000) Calling .GetMachineName
	I0610 18:52:40.605039    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:40.605167    6599 start.go:159] libmachine.API.Create for "addons-992000" (driver="hyperkit")
	I0610 18:52:40.605202    6599 client.go:168] LocalClient.Create starting
	I0610 18:52:40.605262    6599 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem
	I0610 18:52:40.665197    6599 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem
	I0610 18:52:40.765492    6599 main.go:141] libmachine: Running pre-create checks...
	I0610 18:52:40.765501    6599 main.go:141] libmachine: (addons-992000) Calling .PreCreateCheck
	I0610 18:52:40.765681    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:40.765844    6599 main.go:141] libmachine: (addons-992000) Calling .GetConfigRaw
	I0610 18:52:40.766358    6599 main.go:141] libmachine: Creating machine...
	I0610 18:52:40.766373    6599 main.go:141] libmachine: (addons-992000) Calling .Create
	I0610 18:52:40.766530    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:40.766696    6599 main.go:141] libmachine: (addons-992000) DBG | I0610 18:52:40.766494    6607 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 18:52:40.766763    6599 main.go:141] libmachine: (addons-992000) Downloading /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-5942/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 18:52:40.957305    6599 main.go:141] libmachine: (addons-992000) DBG | I0610 18:52:40.957155    6607 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/id_rsa...
	I0610 18:52:40.991481    6599 main.go:141] libmachine: (addons-992000) DBG | I0610 18:52:40.991382    6607 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/addons-992000.rawdisk...
	I0610 18:52:40.991492    6599 main.go:141] libmachine: (addons-992000) DBG | Writing magic tar header
	I0610 18:52:40.991500    6599 main.go:141] libmachine: (addons-992000) DBG | Writing SSH key tar header
	I0610 18:52:40.991972    6599 main.go:141] libmachine: (addons-992000) DBG | I0610 18:52:40.991934    6607 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000 ...
	I0610 18:52:41.363715    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:41.363732    6599 main.go:141] libmachine: (addons-992000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/hyperkit.pid
	I0610 18:52:41.363760    6599 main.go:141] libmachine: (addons-992000) DBG | Using UUID 2a1ff4bd-47e8-42ab-9656-3e3baac47914
	I0610 18:52:41.610642    6599 main.go:141] libmachine: (addons-992000) DBG | Generated MAC 9a:f8:ad:2:8c:c7
	I0610 18:52:41.610677    6599 main.go:141] libmachine: (addons-992000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-992000
	I0610 18:52:41.610730    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2a1ff4bd-47e8-42ab-9656-3e3baac47914", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 18:52:41.610765    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2a1ff4bd-47e8-42ab-9656-3e3baac47914", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 18:52:41.610851    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2a1ff4bd-47e8-42ab-9656-3e3baac47914", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/addons-992000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-992000"}
	I0610 18:52:41.610891    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2a1ff4bd-47e8-42ab-9656-3e3baac47914 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/addons-992000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-992000"
	I0610 18:52:41.610908    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 18:52:41.613924    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 DEBUG: hyperkit: Pid is 6612
	I0610 18:52:41.614371    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 0
	I0610 18:52:41.614391    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:41.614442    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:41.615332    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:41.615381    6599 main.go:141] libmachine: (addons-992000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0610 18:52:41.615412    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 18:52:41.615427    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 18:52:41.615435    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 18:52:41.615443    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 18:52:41.621393    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 18:52:41.674957    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 18:52:41.675600    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 18:52:41.675617    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 18:52:41.675625    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 18:52:41.675631    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 18:52:42.205326    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 18:52:42.205341    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 18:52:42.321771    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 18:52:42.321803    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 18:52:42.321838    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 18:52:42.321855    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 18:52:42.322713    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 18:52:42.322727    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:42 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 18:52:43.616595    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 1
	I0610 18:52:43.616630    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:43.616788    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:43.617702    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:43.617776    6599 main.go:141] libmachine: (addons-992000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0610 18:52:43.617800    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 18:52:43.617810    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 18:52:43.617848    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 18:52:43.617860    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 18:52:45.618595    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 2
	I0610 18:52:45.618621    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:45.618794    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:45.619889    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:45.619958    6599 main.go:141] libmachine: (addons-992000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0610 18:52:45.619987    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 18:52:45.620031    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 18:52:45.620053    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 18:52:45.620088    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 18:52:47.621581    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 3
	I0610 18:52:47.621599    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:47.621725    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:47.622578    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:47.622591    6599 main.go:141] libmachine: (addons-992000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0610 18:52:47.622597    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 18:52:47.622603    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 18:52:47.622609    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 18:52:47.622616    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 18:52:47.649060    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:47 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0610 18:52:47.649106    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:47 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0610 18:52:47.649118    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:47 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0610 18:52:47.671649    6599 main.go:141] libmachine: (addons-992000) DBG | 2024/06/10 18:52:47 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0610 18:52:49.623243    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 4
	I0610 18:52:49.623260    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:49.623347    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:49.624171    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:49.624236    6599 main.go:141] libmachine: (addons-992000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I0610 18:52:49.624249    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 18:52:49.624258    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 18:52:49.624277    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 18:52:49.624294    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 18:52:51.625049    6599 main.go:141] libmachine: (addons-992000) DBG | Attempt 5
	I0610 18:52:51.625121    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:51.625244    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:51.626153    6599 main.go:141] libmachine: (addons-992000) DBG | Searching for 9a:f8:ad:2:8c:c7 in /var/db/dhcpd_leases ...
	I0610 18:52:51.626241    6599 main.go:141] libmachine: (addons-992000) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I0610 18:52:51.626264    6599 main.go:141] libmachine: (addons-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 18:52:51.626282    6599 main.go:141] libmachine: (addons-992000) DBG | Found match: 9a:f8:ad:2:8c:c7
	I0610 18:52:51.626294    6599 main.go:141] libmachine: (addons-992000) DBG | IP: 192.169.0.6
	I0610 18:52:51.626395    6599 main.go:141] libmachine: (addons-992000) Calling .GetConfigRaw
	I0610 18:52:51.627021    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:51.627170    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:51.627328    6599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 18:52:51.627357    6599 main.go:141] libmachine: (addons-992000) Calling .GetState
	I0610 18:52:51.627511    6599 main.go:141] libmachine: (addons-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 18:52:51.627585    6599 main.go:141] libmachine: (addons-992000) DBG | hyperkit pid from json: 6612
	I0610 18:52:51.628559    6599 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 18:52:51.628592    6599 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 18:52:51.628613    6599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 18:52:51.628619    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:51.628807    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:51.628943    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:51.629115    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:51.629293    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:51.629957    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:51.630293    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:51.630316    6599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 18:52:52.689382    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 18:52:52.689395    6599 main.go:141] libmachine: Detecting the provisioner...
	I0610 18:52:52.689400    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:52.689551    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:52.689657    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.689761    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.689845    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:52.689973    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:52.690126    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:52.690134    6599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 18:52:52.749026    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 18:52:52.749095    6599 main.go:141] libmachine: found compatible host: buildroot
	I0610 18:52:52.749101    6599 main.go:141] libmachine: Provisioning with buildroot...
	I0610 18:52:52.749106    6599 main.go:141] libmachine: (addons-992000) Calling .GetMachineName
	I0610 18:52:52.749232    6599 buildroot.go:166] provisioning hostname "addons-992000"
	I0610 18:52:52.749243    6599 main.go:141] libmachine: (addons-992000) Calling .GetMachineName
	I0610 18:52:52.749340    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:52.749445    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:52.749541    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.749651    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.749748    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:52.749873    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:52.750008    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:52.750016    6599 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-992000 && echo "addons-992000" | sudo tee /etc/hostname
	I0610 18:52:52.818209    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-992000
	
	I0610 18:52:52.818228    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:52.818352    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:52.818449    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.818528    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.818621    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:52.818759    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:52.818908    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:52.818920    6599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-992000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-992000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-992000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 18:52:52.885382    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 18:52:52.885404    6599 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 18:52:52.885424    6599 buildroot.go:174] setting up certificates
	I0610 18:52:52.885432    6599 provision.go:84] configureAuth start
	I0610 18:52:52.885439    6599 main.go:141] libmachine: (addons-992000) Calling .GetMachineName
	I0610 18:52:52.885576    6599 main.go:141] libmachine: (addons-992000) Calling .GetIP
	I0610 18:52:52.885670    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:52.885760    6599 provision.go:143] copyHostCerts
	I0610 18:52:52.885853    6599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 18:52:52.886118    6599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 18:52:52.886309    6599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 18:52:52.886466    6599 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.addons-992000 san=[127.0.0.1 192.169.0.6 addons-992000 localhost minikube]
	I0610 18:52:52.979109    6599 provision.go:177] copyRemoteCerts
	I0610 18:52:52.979166    6599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 18:52:52.979183    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:52.979362    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:52.979538    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:52.979633    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:52.979807    6599 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/id_rsa Username:docker}
	I0610 18:52:53.016558    6599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 18:52:53.036613    6599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 18:52:53.056107    6599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 18:52:53.075539    6599 provision.go:87] duration metric: took 190.090771ms to configureAuth
	I0610 18:52:53.075551    6599 buildroot.go:189] setting minikube options for container-runtime
	I0610 18:52:53.075683    6599 config.go:182] Loaded profile config "addons-992000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 18:52:53.075696    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:53.075842    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:53.075923    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:53.076012    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.076098    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.076190    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:53.076305    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:53.076431    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:53.076439    6599 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 18:52:53.133661    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 18:52:53.133675    6599 buildroot.go:70] root file system type: tmpfs
	I0610 18:52:53.133754    6599 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 18:52:53.133771    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:53.133908    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:53.134021    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.134127    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.134219    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:53.134360    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:53.134502    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:53.134556    6599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 18:52:53.203985    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 18:52:53.204008    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:53.204144    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:53.204247    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.204347    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:53.204438    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:53.204562    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:53.204706    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:53.204722    6599 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 18:52:54.718513    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 18:52:54.718543    6599 main.go:141] libmachine: Checking connection to Docker...
	I0610 18:52:54.718555    6599 main.go:141] libmachine: (addons-992000) Calling .GetURL
	I0610 18:52:54.718805    6599 main.go:141] libmachine: Docker is up and running!
	I0610 18:52:54.718829    6599 main.go:141] libmachine: Reticulating splines...
	I0610 18:52:54.718834    6599 client.go:171] duration metric: took 14.113404685s to LocalClient.Create
	I0610 18:52:54.718845    6599 start.go:167] duration metric: took 14.11345827s to libmachine.API.Create "addons-992000"
	I0610 18:52:54.718883    6599 start.go:293] postStartSetup for "addons-992000" (driver="hyperkit")
	I0610 18:52:54.718901    6599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 18:52:54.718911    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:54.719172    6599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 18:52:54.719190    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:54.719402    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:54.719560    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:54.719640    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:54.719816    6599 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/id_rsa Username:docker}
	I0610 18:52:54.760540    6599 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 18:52:54.764321    6599 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 18:52:54.764332    6599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 18:52:54.764431    6599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 18:52:54.764479    6599 start.go:296] duration metric: took 45.588863ms for postStartSetup
	I0610 18:52:54.764500    6599 main.go:141] libmachine: (addons-992000) Calling .GetConfigRaw
	I0610 18:52:54.765174    6599 main.go:141] libmachine: (addons-992000) Calling .GetIP
	I0610 18:52:54.765322    6599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/addons-992000/config.json ...
	I0610 18:52:54.765695    6599 start.go:128] duration metric: took 14.193163931s to createHost
	I0610 18:52:54.765712    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:54.765828    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:54.765910    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:54.766010    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:54.766083    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:54.766190    6599 main.go:141] libmachine: Using SSH client type: native
	I0610 18:52:54.766322    6599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7cc2f00] 0x7cc5c60 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0610 18:52:54.766329    6599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 18:52:54.827047    6599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718070774.595339328
	
	I0610 18:52:54.827058    6599 fix.go:216] guest clock: 1718070774.595339328
	I0610 18:52:54.827063    6599 fix.go:229] Guest: 2024-06-10 18:52:54.595339328 -0700 PDT Remote: 2024-06-10 18:52:54.765703 -0700 PDT m=+14.599132954 (delta=-170.363672ms)
	I0610 18:52:54.827079    6599 fix.go:200] guest clock delta is within tolerance: -170.363672ms
	I0610 18:52:54.827082    6599 start.go:83] releasing machines lock for "addons-992000", held for 14.254691141s
	I0610 18:52:54.827109    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:54.827233    6599 main.go:141] libmachine: (addons-992000) Calling .GetIP
	I0610 18:52:54.827341    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:54.827671    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:54.827783    6599 main.go:141] libmachine: (addons-992000) Calling .DriverName
	I0610 18:52:54.827890    6599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 18:52:54.827922    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:54.827927    6599 ssh_runner.go:195] Run: cat /version.json
	I0610 18:52:54.827936    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHHostname
	I0610 18:52:54.828048    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:54.828067    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHPort
	I0610 18:52:54.828156    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:54.828175    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHKeyPath
	I0610 18:52:54.828236    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:54.828280    6599 main.go:141] libmachine: (addons-992000) Calling .GetSSHUsername
	I0610 18:52:54.828358    6599 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/id_rsa Username:docker}
	I0610 18:52:54.828365    6599 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/addons-992000/id_rsa Username:docker}
	I0610 18:52:54.914694    6599 ssh_runner.go:195] Run: systemctl --version
	I0610 18:52:54.919992    6599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 18:52:54.924237    6599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 18:52:54.924284    6599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 18:52:54.937347    6599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 18:52:54.937363    6599 start.go:494] detecting cgroup driver to use...
	I0610 18:52:54.937462    6599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 18:52:54.952139    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 18:52:54.960890    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 18:52:54.969553    6599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 18:52:54.969590    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 18:52:54.978313    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 18:52:54.987065    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 18:52:54.995826    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 18:52:55.004623    6599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 18:52:55.013539    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 18:52:55.022350    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 18:52:55.031309    6599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 18:52:55.040270    6599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 18:52:55.048241    6599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 18:52:55.056300    6599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 18:52:55.164049    6599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 18:52:55.183586    6599 start.go:494] detecting cgroup driver to use...
	I0610 18:52:55.183749    6599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 18:52:55.198768    6599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 18:52:55.210072    6599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 18:52:55.242123    6599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 18:52:55.252823    6599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 18:52:55.263296    6599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 18:52:55.284426    6599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 18:52:55.294874    6599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 18:52:55.310146    6599 ssh_runner.go:195] Run: which cri-dockerd
	I0610 18:52:55.312924    6599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 18:52:55.320089    6599 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 18:52:55.333491    6599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 18:52:55.429516    6599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 18:52:55.524668    6599 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 18:52:55.524753    6599 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 18:52:55.538490    6599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 18:52:55.639322    6599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 18:53:56.450879    6599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.810577714s)
	I0610 18:53:56.450941    6599 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 18:53:56.485974    6599 out.go:177] 
	W0610 18:53:56.507860    6599 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 01:52:53 addons-992000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 01:52:53 addons-992000 dockerd[523]: time="2024-06-11T01:52:53.290178013Z" level=info msg="Starting up"
	Jun 11 01:52:53 addons-992000 dockerd[523]: time="2024-06-11T01:52:53.290694596Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 01:52:53 addons-992000 dockerd[523]: time="2024-06-11T01:52:53.293560650Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=531
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.310485968Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330468233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330529371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330591855Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330626117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330700865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330743819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330888495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330929290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330962441Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.330996869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.331079059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.331252459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.332824907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.332877443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.333011655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.333054598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.333141633Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.333205733Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.333240262Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.335906036Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.335977708Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336048089Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336093839Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336130262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336220151Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336396255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336496422Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336534526Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336620323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336658506Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336689808Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336729204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336769074Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336801781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336838995Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336872239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336918495Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336958586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.336991920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337024949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337055511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337084639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337113828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337151989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337185175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337219384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337256552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337289202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337318946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337348224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337414618Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337463257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337495308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337524248Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337578068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337640082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337680855Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337711812Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337741954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337771568Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337800282Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.337965599Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.338056998Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.338123980Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 01:52:53 addons-992000 dockerd[531]: time="2024-06-11T01:52:53.338209604Z" level=info msg="containerd successfully booted in 0.028420s"
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.320838224Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.327912320Z" level=info msg="Loading containers: start."
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.442369125Z" level=info msg="Loading containers: done."
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.450757855Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.450899239Z" level=info msg="Daemon has completed initialization"
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.483379531Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 01:52:54 addons-992000 dockerd[523]: time="2024-06-11T01:52:54.483494845Z" level=info msg="API listen on [::]:2376"
	Jun 11 01:52:54 addons-992000 systemd[1]: Started Docker Application Container Engine.
	Jun 11 01:52:55 addons-992000 dockerd[523]: time="2024-06-11T01:52:55.420886737Z" level=info msg="Processing signal 'terminated'"
	Jun 11 01:52:55 addons-992000 dockerd[523]: time="2024-06-11T01:52:55.421751261Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 01:52:55 addons-992000 dockerd[523]: time="2024-06-11T01:52:55.421940209Z" level=info msg="Daemon shutdown complete"
	Jun 11 01:52:55 addons-992000 dockerd[523]: time="2024-06-11T01:52:55.421989265Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 01:52:55 addons-992000 dockerd[523]: time="2024-06-11T01:52:55.422002534Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 01:52:55 addons-992000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 01:52:56 addons-992000 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 01:52:56 addons-992000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 01:52:56 addons-992000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 01:52:56 addons-992000 dockerd[861]: time="2024-06-11T01:52:56.478266805Z" level=info msg="Starting up"
	Jun 11 01:53:56 addons-992000 dockerd[861]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 01:53:56 addons-992000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 01:53:56 addons-992000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 01:53:56 addons-992000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0610 18:53:56.507971    6599 out.go:239] * 
	W0610 18:53:56.509180    6599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 18:53:56.593735    6599 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-amd64 start -p addons-992000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (76.48s)

TestCertOptions (82.84s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-947000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-947000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 90 (1m17.094562705s)

-- stdout --
	* [cert-options-947000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-947000" primary control-plane node in "cert-options-947000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 03:11:30 cert-options-947000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:11:30 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:30.385279267Z" level=info msg="Starting up"
	Jun 11 03:11:30 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:30.385925530Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 03:11:30 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:30.391689860Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=526
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.408639601Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426177120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426237768Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426332718Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426354022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426436541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426477302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426703918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426748789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426767172Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426776469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.426852520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.427082085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429274642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429342298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429515057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429632803Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429757343Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429863481Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.429905106Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.432736276Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.432830264Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.432887125Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433004959Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433054120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433170729Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433372425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433544216Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433592119Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433633166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433688537Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433724234Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433757904Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433791421Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433826178Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433859306Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433893845Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433929956Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.433968191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434003361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434039336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434072938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434106079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434141939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434174127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434207152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434240413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434274699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434306709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434339045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434409218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434452319Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434491302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434525473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434560908Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434641545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434689115Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434808245Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434849425Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434881714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434914616Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.434945961Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.435161519Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.435255890Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.435374240Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 03:11:30 cert-options-947000 dockerd[526]: time="2024-06-11T03:11:30.435450471Z" level=info msg="containerd successfully booted in 0.028066s"
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.416596993Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.424125489Z" level=info msg="Loading containers: start."
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.530299282Z" level=info msg="Loading containers: done."
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.538159477Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.538286605Z" level=info msg="Daemon has completed initialization"
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.570992396Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 03:11:31 cert-options-947000 systemd[1]: Started Docker Application Container Engine.
	Jun 11 03:11:31 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:31.571196448Z" level=info msg="API listen on [::]:2376"
	Jun 11 03:11:32 cert-options-947000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 03:11:32 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:32.566633728Z" level=info msg="Processing signal 'terminated'"
	Jun 11 03:11:32 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:32.567717986Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 03:11:32 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:32.568045136Z" level=info msg="Daemon shutdown complete"
	Jun 11 03:11:32 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:32.568098000Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 03:11:32 cert-options-947000 dockerd[520]: time="2024-06-11T03:11:32.568134154Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 03:11:33 cert-options-947000 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 03:11:33 cert-options-947000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 03:11:33 cert-options-947000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:11:33 cert-options-947000 dockerd[811]: time="2024-06-11T03:11:33.612908956Z" level=info msg="Starting up"
	Jun 11 03:12:33 cert-options-947000 dockerd[811]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 03:12:33 cert-options-947000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 03:12:33 cert-options-947000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 03:12:33 cert-options-947000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
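The daemon exit above is a dial timeout against containerd's unix socket: dockerd gives up after 60 seconds because /run/containerd/containerd.sock never answers. As a rough illustration only (a hypothetical standalone probe, not something the test suite runs), the same reachability check can be expressed with Go's standard net package:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the dockerd error above; 60s mirrors its dial deadline.
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 60*time.Second)
		if err != nil {
			fmt.Println("containerd unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is up")
	}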
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-947000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 90
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-947000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-947000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 1 (130.866047ms)

                                                
                                                
-- stdout --
	Can't open /var/lib/minikube/certs/apiserver.crt for reading, No such file or directory
	139919920316480:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:69:fopen('/var/lib/minikube/certs/apiserver.crt','r')
	139919920316480:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:76:
	unable to load certificate

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-947000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 1
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-947000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 10 Jun 2024 20:11:33 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.1\n\t      name: cluster_info\n\t    server: https://192.169.0.30:8443\n\t  name: cert-expiration-918000\n\tcontexts:\n\t- context:\n\t    cluster: cert-expiration-918000\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 10 Jun 2024 20:11:33 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.1\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-expiration-918000\n\t  name: cert-expiration-918000\n\tcurrent-context: cert-expiration-918000\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-expiration-918000\n\t  user:\n\t    client-certificate: /Users/jenkins/minikube-integration/19046-5942/.minikube/
profiles/cert-expiration-918000/client.crt\n\t    client-key: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/cert-expiration-918000/client.key\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-947000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-947000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 1 (132.487914ms)

                                                
                                                
-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-947000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 1
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-06-10 20:12:33.778872 -0700 PDT m=+4827.849713895
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-947000 -n cert-options-947000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-947000 -n cert-options-947000: exit status 6 (151.993724ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:12:33.918440   11129 status.go:417] kubeconfig endpoint: get endpoint: "cert-options-947000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "cert-options-947000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "cert-options-947000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-947000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-947000: (5.293168491s)
--- FAIL: TestCertOptions (82.84s)
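The SAN assertions above reduce to parsing the apiserver certificate and checking its DNS and IP subject alternative names. A minimal sketch of that check, assuming a local PEM file named apiserver.crt (the test instead reads /var/lib/minikube/certs/apiserver.crt inside the VM over SSH), using only the standard library:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical local copy of the cert the test could not find in the VM.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err) // mirrors the "No such file or directory" failure above
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// SANs live in DNSNames and IPAddresses; the assertions above expect
		// localhost, www.google.com, 127.0.0.1, and 192.168.15.15 among them.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
	}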

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (227.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.496015358s)
ha_test.go:413: expected profile "ha-868000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-868000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-868000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-868000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.10\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.12\",\"Port\":0,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":
false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,
\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000
E0610 19:15:36.282393    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000: exit status 3 (1m15.091784546s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 19:16:27.331309    8477 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0610 19:16:27.331330    8477 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-868000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (227.59s)
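The assertion at ha_test.go:413 decodes the quoted `profile list --output json` payload and inspects the profile's Status field. A reduced sketch of that decode, assuming a truncated stand-in for the payload and mirroring only the two fields the check actually reads:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal mirror of the fields the assertion reads; the real config struct is far larger.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Truncated stand-in for the payload quoted in the failure above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-868000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test fails here: it expects "Degraded" but the profile reports "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}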

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (227.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.490392496s)
ha_test.go:413: expected profile "ha-868000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-868000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-868000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-868000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.10\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.12\",\"Port\":0,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":
false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,
\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000
E0610 19:25:36.261464    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000: exit status 3 (1m15.093735318s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 19:26:09.471327    8816 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0610 19:26:09.471352    8816 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-868000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (227.58s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (225.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-868000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-868000 --control-plane -v=7 --alsologtostderr: exit status 103 (2m30.166286069s)

                                                
                                                
-- stdout --
	* The control-plane node ha-868000-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-868000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:26:09.528898    8853 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:26:09.529556    8853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:26:09.529563    8853 out.go:304] Setting ErrFile to fd 2...
	I0610 19:26:09.529567    8853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:26:09.529765    8853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:26:09.530126    8853 mustload.go:65] Loading cluster: ha-868000
	I0610 19:26:09.530455    8853 config.go:182] Loaded profile config "ha-868000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:26:09.530839    8853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:26:09.530880    8853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:26:09.539355    8853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52590
	I0610 19:26:09.539742    8853 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:26:09.540159    8853 main.go:141] libmachine: Using API Version  1
	I0610 19:26:09.540194    8853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:26:09.540409    8853 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:26:09.540528    8853 main.go:141] libmachine: (ha-868000) Calling .GetState
	I0610 19:26:09.540608    8853 main.go:141] libmachine: (ha-868000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:26:09.540685    8853 main.go:141] libmachine: (ha-868000) DBG | hyperkit pid from json: 8647
	I0610 19:26:09.541697    8853 host.go:66] Checking if "ha-868000" exists ...
	I0610 19:26:09.541934    8853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:26:09.541953    8853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:26:09.550693    8853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52592
	I0610 19:26:09.551038    8853 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:26:09.551392    8853 main.go:141] libmachine: Using API Version  1
	I0610 19:26:09.551408    8853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:26:09.551614    8853 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:26:09.551725    8853 main.go:141] libmachine: (ha-868000) Calling .DriverName
	I0610 19:26:09.552059    8853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:26:09.552088    8853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:26:09.560556    8853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52594
	I0610 19:26:09.560905    8853 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:26:09.561223    8853 main.go:141] libmachine: Using API Version  1
	I0610 19:26:09.561234    8853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:26:09.561460    8853 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:26:09.561570    8853 main.go:141] libmachine: (ha-868000-m02) Calling .GetState
	I0610 19:26:09.561647    8853 main.go:141] libmachine: (ha-868000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:26:09.561736    8853 main.go:141] libmachine: (ha-868000-m02) DBG | hyperkit pid from json: 8661
	I0610 19:26:09.562777    8853 host.go:66] Checking if "ha-868000-m02" exists ...
	I0610 19:26:09.563040    8853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:26:09.563066    8853 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:26:09.571484    8853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52596
	I0610 19:26:09.571810    8853 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:26:09.572173    8853 main.go:141] libmachine: Using API Version  1
	I0610 19:26:09.572192    8853 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:26:09.572410    8853 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:26:09.572519    8853 main.go:141] libmachine: (ha-868000-m02) Calling .DriverName
	I0610 19:26:09.572624    8853 api_server.go:166] Checking apiserver status ...
	I0610 19:26:09.572683    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:26:09.572706    8853 main.go:141] libmachine: (ha-868000) Calling .GetSSHHostname
	I0610 19:26:09.572808    8853 main.go:141] libmachine: (ha-868000) Calling .GetSSHPort
	I0610 19:26:09.572887    8853 main.go:141] libmachine: (ha-868000) Calling .GetSSHKeyPath
	I0610 19:26:09.572969    8853 main.go:141] libmachine: (ha-868000) Calling .GetSSHUsername
	I0610 19:26:09.573046    8853 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/ha-868000/id_rsa Username:docker}
	W0610 19:27:24.571089    8853 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.9:22: connect: operation timed out
	W0610 19:27:24.571204    8853 api_server.go:170] stopped: unable to get apiserver pid: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	W0610 19:27:24.571515    8853 out.go:239] ! The control-plane node ha-868000 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-868000 apiserver is not running (will try others): (state=Stopped)
	I0610 19:27:24.571530    8853 api_server.go:166] Checking apiserver status ...
	I0610 19:27:24.571597    8853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:27:24.571619    8853 main.go:141] libmachine: (ha-868000-m02) Calling .GetSSHHostname
	I0610 19:27:24.571831    8853 main.go:141] libmachine: (ha-868000-m02) Calling .GetSSHPort
	I0610 19:27:24.572016    8853 main.go:141] libmachine: (ha-868000-m02) Calling .GetSSHKeyPath
	I0610 19:27:24.572197    8853 main.go:141] libmachine: (ha-868000-m02) Calling .GetSSHUsername
	I0610 19:27:24.572367    8853 sshutil.go:53] new ssh client: &{IP:192.169.0.10 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/ha-868000-m02/id_rsa Username:docker}
	W0610 19:28:39.569414    8853 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.10:22: connect: operation timed out
	W0610 19:28:39.569502    8853 api_server.go:170] stopped: unable to get apiserver pid: NewSession: new client: new client: dial tcp 192.169.0.10:22: connect: operation timed out
	I0610 19:28:39.591109    8853 out.go:177] * The control-plane node ha-868000-m02 apiserver is not running: (state=Stopped)
	I0610 19:28:39.611965    8853 out.go:177]   To start a cluster, run: "minikube start -p ha-868000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-868000 --control-plane -v=7 --alsologtostderr" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000: exit status 3 (1m15.095255672s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 19:29:54.725841    8932 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0610 19:29:54.725858    8932 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-868000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (225.26s)
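The dial errors above are plain TCP timeouts to port 22 of the control-plane VMs. A hypothetical standalone reachability probe (the real code path goes through minikube's sshutil SSH client, not a bare dial) looks like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the errors above; the timeout value here is arbitrary.
		conn, err := net.DialTimeout("tcp", "192.169.0.9:22", 10*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // e.g. "connect: operation timed out"
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable")
	}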

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (190.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0610 19:30:36.252781    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 profile list --output json: signal: killed (1m55.896463957s)
ha_test.go:283: failed to list profiles with json format. args "out/minikube-darwin-amd64 profile list --output json": signal: killed
ha_test.go:289: failed to decode json from profile list: args "out/minikube-darwin-amd64 profile list --output json": unexpected end of JSON input
ha_test.go:302: expected the json of 'profile list' to include "ha-868000" but got *""*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-868000 -n ha-868000: exit status 3 (1m15.092187435s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 19:33:05.707741    9058 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out
	E0610 19:33:05.707765    9058 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.9:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-868000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (190.99s)
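The `unexpected end of JSON input` at ha_test.go:289 follows directly from the killed `profile list` process: its stdout is empty, and decoding an empty buffer with encoding/json yields exactly that error, as this small reproduction shows:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var v map[string]interface{}
		// An empty byte slice stands in for the stdout of the killed process.
		err := json.Unmarshal([]byte(""), &v)
		fmt.Println(err) // prints: unexpected end of JSON input
	}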

                                                
                                    
TestMultiNode/serial/StartAfterStop (122.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 node start m03 -v=7 --alsologtostderr: exit status 90 (1m17.484068965s)

                                                
                                                
-- stdout --
	* Starting "multinode-353000-m03" worker node in "multinode-353000" cluster
	* Restarting existing hyperkit VM for "multinode-353000-m03" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:43:12.500594    9839 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:43:12.501523    9839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:43:12.501529    9839 out.go:304] Setting ErrFile to fd 2...
	I0610 19:43:12.501533    9839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:43:12.501739    9839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:43:12.502088    9839 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:43:12.502411    9839 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:43:12.502743    9839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.502785    9839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.511133    9839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53511
	I0610 19:43:12.511543    9839 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.511972    9839 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.511991    9839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.512219    9839 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.512330    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:43:12.512423    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.512495    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9620
	I0610 19:43:12.513715    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid 9620 missing from process table
	W0610 19:43:12.513748    9839 host.go:58] "multinode-353000-m03" host status: Stopped
	I0610 19:43:12.535149    9839 out.go:177] * Starting "multinode-353000-m03" worker node in "multinode-353000" cluster
	I0610 19:43:12.556127    9839 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:43:12.556201    9839 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 19:43:12.556225    9839 cache.go:56] Caching tarball of preloaded images
	I0610 19:43:12.556563    9839 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:43:12.556602    9839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:43:12.556815    9839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:43:12.558398    9839 start.go:360] acquireMachinesLock for multinode-353000-m03: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:43:12.558571    9839 start.go:364] duration metric: took 119.009µs to acquireMachinesLock for "multinode-353000-m03"
	I0610 19:43:12.558609    9839 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:43:12.558631    9839 fix.go:54] fixHost starting: m03
	I0610 19:43:12.559072    9839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.559103    9839 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.568029    9839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53513
	I0610 19:43:12.568392    9839 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.568781    9839 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.568802    9839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.569032    9839 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.569155    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:12.569254    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:43:12.569337    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.569426    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9620
	I0610 19:43:12.570630    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid 9620 missing from process table
	I0610 19:43:12.570652    9839 fix.go:112] recreateIfNeeded on multinode-353000-m03: state=Stopped err=<nil>
	I0610 19:43:12.570668    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	W0610 19:43:12.570742    9839 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:43:12.592038    9839 out.go:177] * Restarting existing hyperkit VM for "multinode-353000-m03" ...
	I0610 19:43:12.613146    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .Start
	I0610 19:43:12.613375    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.613405    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid
	I0610 19:43:12.613419    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Using UUID 9ed320a4-4e20-4225-87bc-ec0cd1dc4108
	I0610 19:43:12.631877    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Generated MAC fe:8b:79:f3:b9:7
	I0610 19:43:12.631920    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:43:12.632045    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9ed320a4-4e20-4225-87bc-ec0cd1dc4108", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f1410)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0610 19:43:12.632080    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9ed320a4-4e20-4225-87bc-ec0cd1dc4108", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f1410)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0610 19:43:12.632152    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9ed320a4-4e20-4225-87bc-ec0cd1dc4108", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/multinode-353000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage,/Users/j
enkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:43:12.632224    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9ed320a4-4e20-4225-87bc-ec0cd1dc4108 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/multinode-353000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/mult
inode-353000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:43:12.632246    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:43:12.633636    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Pid is 9843
	I0610 19:43:12.634130    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Attempt 0
	I0610 19:43:12.634143    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.634220    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:43:12.636295    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Searching for fe:8b:79:f3:b9:7 in /var/db/dhcpd_leases ...
	I0610 19:43:12.636374    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:43:12.636404    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x6667b9bf}
	I0610 19:43:12.636427    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Found match: fe:8b:79:f3:b9:7
	I0610 19:43:12.636453    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | IP: 192.169.0.21
	I0610 19:43:12.636496    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetConfigRaw
	I0610 19:43:12.637207    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:43:12.637456    9839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:43:12.637932    9839 machine.go:94] provisionDockerMachine start ...
	I0610 19:43:12.637943    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:12.638120    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:12.638228    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:12.638329    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:12.638444    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:12.638565    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:12.638699    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:12.638969    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:12.638983    9839 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:43:12.642645    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:43:12.651837    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:43:12.653126    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:43:12.653161    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:43:12.653186    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:43:12.653204    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:43:13.038643    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:43:13.038660    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:43:13.153576    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:43:13.153596    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:43:13.153619    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:43:13.153631    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:43:13.154475    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:43:13.154484    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:43:18.506079    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:43:18.506146    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:43:18.506154    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:43:18.529586    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:43:25.804740    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:43:25.804756    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
	I0610 19:43:25.804926    9839 buildroot.go:166] provisioning hostname "multinode-353000-m03"
	I0610 19:43:25.804938    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
	I0610 19:43:25.805031    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:25.805125    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:25.805220    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:25.805306    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:25.805390    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:25.805511    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:25.805654    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:25.805663    9839 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000-m03 && echo "multinode-353000-m03" | sudo tee /etc/hostname
	I0610 19:43:25.868684    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m03
	
	I0610 19:43:25.868705    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:25.868857    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:25.868948    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:25.869031    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:25.869118    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:25.869271    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:25.869480    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:25.869493    9839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:43:25.928390    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:43:25.928413    9839 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:43:25.928430    9839 buildroot.go:174] setting up certificates
	I0610 19:43:25.928443    9839 provision.go:84] configureAuth start
	I0610 19:43:25.928452    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
	I0610 19:43:25.928606    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:43:25.928718    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:25.928816    9839 provision.go:143] copyHostCerts
	I0610 19:43:25.928846    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:43:25.928917    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:43:25.928925    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:43:25.929059    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:43:25.929256    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:43:25.929296    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:43:25.929301    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:43:25.929391    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:43:25.929535    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:43:25.929574    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:43:25.929578    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:43:25.929669    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:43:25.929823    9839 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m03 san=[127.0.0.1 192.169.0.21 localhost minikube multinode-353000-m03]
	I0610 19:43:26.058889    9839 provision.go:177] copyRemoteCerts
	I0610 19:43:26.058948    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:43:26.058966    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:26.059150    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:26.059345    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.059574    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:26.059755    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:43:26.093665    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:43:26.093756    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:43:26.112700    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:43:26.112765    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:43:26.131814    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:43:26.131879    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 19:43:26.151279    9839 provision.go:87] duration metric: took 222.828616ms to configureAuth
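configureAuth generated a server certificate signed by the cluster CA with the SANs listed above (127.0.0.1, 192.169.0.21, localhost, minikube, multinode-353000-m03) and copied the CA cert, server cert, and server key into /etc/docker for dockerd's --tlsverify mode. minikube does this in Go; for anyone reproducing the certificate by hand, a roughly equivalent openssl sketch (bash; standard openssl flags, not minikube's internals) is:

	# Sketch only: create a CA-signed server cert with the same SANs.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.multinode-353000-m03"
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.21,DNS:localhost,DNS:minikube,DNS:multinode-353000-m03')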
	I0610 19:43:26.151292    9839 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:43:26.151456    9839 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:43:26.151469    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:26.151609    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:26.151692    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:26.151777    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.151874    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.151972    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:26.152074    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:26.152198    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:26.152205    9839 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:43:26.206329    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:43:26.206341    9839 buildroot.go:70] root file system type: tmpfs
	I0610 19:43:26.206411    9839 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:43:26.206426    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:26.206547    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:26.206627    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.206701    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.206779    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:26.206894    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:26.207036    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:26.207079    9839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:43:26.271573    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:43:26.271595    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:26.271735    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:26.271830    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.271923    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:26.272014    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:26.272144    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:26.272283    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:26.272295    9839 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:43:27.824055    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:43:27.824071    9839 machine.go:97] duration metric: took 15.186657886s to provisionDockerMachine
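The "can't stat" message above is the expected first-boot path: the diff-or-install command only replaces the unit when the new file differs from the existing /lib/systemd/system/docker.service (or, as here, when there is none), and the || branch then installs it and reloads, enables, and restarts docker. The same pattern, written out and annotated:

	# Annotated form of the diff-or-install step above. On a fresh node the diff
	# exits non-zero because the old unit does not exist, so the branch runs once.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload   # pick up the new unit file
	  sudo systemctl enable docker   # creates the multi-user.target.wants symlink seen above
	  sudo systemctl restart docker
	fi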
	I0610 19:43:27.824082    9839 start.go:293] postStartSetup for "multinode-353000-m03" (driver="hyperkit")
	I0610 19:43:27.824091    9839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:43:27.824102    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:27.824287    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:43:27.824303    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:27.824414    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:27.824515    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:27.824601    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:27.824685    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:43:27.864127    9839 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:43:27.868546    9839 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:43:27.868563    9839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:43:27.868710    9839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:43:27.868916    9839 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:43:27.868922    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:43:27.869134    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:43:27.880487    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:43:27.915122    9839 start.go:296] duration metric: took 91.029673ms for postStartSetup
	I0610 19:43:27.915147    9839 fix.go:56] duration metric: took 15.357055265s for fixHost
	I0610 19:43:27.915159    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:27.915293    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:27.915401    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:27.915484    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:27.915567    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:27.915677    9839 main.go:141] libmachine: Using SSH client type: native
	I0610 19:43:27.915819    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0610 19:43:27.915832    9839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:43:27.969555    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073808.315530144
	
	I0610 19:43:27.969566    9839 fix.go:216] guest clock: 1718073808.315530144
	I0610 19:43:27.969571    9839 fix.go:229] Guest: 2024-06-10 19:43:28.315530144 -0700 PDT Remote: 2024-06-10 19:43:27.91515 -0700 PDT m=+15.450960394 (delta=400.380144ms)
	I0610 19:43:27.969594    9839 fix.go:200] guest clock delta is within tolerance: 400.380144ms
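The clock check runs date +%s.%N on the guest and compares the result against the host's wall clock at the moment the command returned; the ~400ms delta is inside minikube's tolerance, so the guest clock is left alone. A rough shell equivalent of the comparison (the 2-second threshold is illustrative, not minikube's actual limit):

	# Sketch: measure guest-vs-host clock skew over SSH.
	guest=$(ssh docker@192.169.0.21 'date +%s.%N')
	host=$(date +%s.%N)                 # assumes GNU date; macOS date lacks %N
	delta=$(echo "$host - $guest" | bc)
	if [ "$(echo "$delta < 2 && $delta > -2" | bc)" -eq 1 ]; then
	  echo "guest clock delta ${delta}s is within tolerance"
	fi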
	I0610 19:43:27.969604    9839 start.go:83] releasing machines lock for "multinode-353000-m03", held for 15.411555485s
	I0610 19:43:27.969624    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:27.969749    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:43:27.969849    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:27.970170    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:27.970269    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:43:27.970351    9839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:43:27.970384    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:27.970425    9839 ssh_runner.go:195] Run: systemctl --version
	I0610 19:43:27.970436    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:43:27.970471    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:27.970506    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:43:27.970548    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:27.970570    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:43:27.970645    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:27.970659    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:43:27.970737    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:43:27.970755    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:43:28.001679    9839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 19:43:28.053397    9839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:43:28.053511    9839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:43:28.068040    9839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:43:28.068055    9839 start.go:494] detecting cgroup driver to use...
	I0610 19:43:28.068157    9839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:43:28.083444    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:43:28.092622    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:43:28.101546    9839 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:43:28.101594    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:43:28.110638    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:43:28.119615    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:43:28.128848    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:43:28.141299    9839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:43:28.151747    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:43:28.160510    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:43:28.168962    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:43:28.177615    9839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:43:28.185433    9839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:43:28.193029    9839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:43:28.290399    9839 ssh_runner.go:195] Run: sudo systemctl restart containerd
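The run of sed edits at 19:43:28.083-28.177 rewrites /etc/crictl.yaml and /etc/containerd/config.toml so the node's runtimes agree with the cgroupfs driver minikube selected: the pause image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false, the legacy runc v1/linux shims are mapped to io.containerd.runc.v2, and containerd is then reloaded and restarted. The two lines that set the driver itself, lifted from the sequence above:

	# Force containerd's runc shim onto the cgroupfs driver, then restart (as in the log).
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd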
	I0610 19:43:28.309700    9839 start.go:494] detecting cgroup driver to use...
	I0610 19:43:28.309773    9839 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:43:28.326304    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:43:28.339423    9839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:43:28.359836    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:43:28.370954    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:43:28.382062    9839 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:43:28.413605    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:43:28.423953    9839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:43:28.439193    9839 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:43:28.442081    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:43:28.449345    9839 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:43:28.463541    9839 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:43:28.563785    9839 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:43:28.665938    9839 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:43:28.666021    9839 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
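The 130-byte daemon.json is not echoed in the log; given the "configuring docker to use \"cgroupfs\" as cgroup driver" message just above, it plausibly carries the cgroup-driver setting. A hypothetical minimal file of that shape (contents assumed, not taken from this run):

	# Hypothetical daemon.json for the cgroupfs driver (assumed, not from this run).
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF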
	I0610 19:43:28.680584    9839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:43:28.773525    9839 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:44:29.808614    9839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.037190692s)
	I0610 19:44:29.808676    9839 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 19:44:29.845338    9839 out.go:177] 
	W0610 19:44:29.866976    9839 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 02:43:26 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.942866425Z" level=info msg="Starting up"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.943763668Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.944369312Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.963212247Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978801260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978878068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978940997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978976313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979090976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979146579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979281764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979324699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979358968Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979388949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979509469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979673869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981224964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981280787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981417525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981461093Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981614004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981665659Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981698019Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982592626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982648000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982684065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982718166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982750040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982814241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983031848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983121896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983158117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983191363Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983222051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983251729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983281020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983310646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983365464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983424196Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983456426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983490080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983532546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983566917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983597251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983626815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983656366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983688471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983717622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983747001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983776597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983807814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983836828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983866074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983899521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983932007Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983971051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984002474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984033203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984105597Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984151257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984184338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984216206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984244573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984272421Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984300228Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984488222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984552119Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984643939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984685931Z" level=info msg="containerd successfully booted in 0.022307s"
	Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.964317792Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.975720029Z" level=info msg="Loading containers: start."
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.095278957Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.131300004Z" level=info msg="Loading containers: done."
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148511044Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148856577Z" level=info msg="Daemon has completed initialization"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167128553Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167290954Z" level=info msg="API listen on [::]:2376"
	Jun 11 02:43:28 multinode-353000-m03 systemd[1]: Started Docker Application Container Engine.
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.132181324Z" level=info msg="Processing signal 'terminated'"
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.133520177Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 02:43:29 multinode-353000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134062466Z" level=info msg="Daemon shutdown complete"
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134155736Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134172085Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:43:30 multinode-353000-m03 dockerd[870]: time="2024-06-11T02:43:30.183722279Z" level=info msg="Starting up"
	Jun 11 02:44:30 multinode-353000-m03 dockerd[870]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
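The journal pins down the failure: dockerd came up once with its managed containerd (pid 494), was stopped for reconfiguration, and on the second start (pid 870) spent the full 60 seconds failing to dial /run/containerd/containerd.sock before systemd marked the unit failed, meaning the system containerd it was now pointed at never accepted a connection. Before retrying on the guest, a quick troubleshooting sketch (not part of the test run):

	# Verify the system containerd is up and its socket exists, then retry docker.
	sudo systemctl is-active containerd || sudo systemctl start containerd
	test -S /run/containerd/containerd.sock && echo "containerd socket present"
	sudo systemctl restart docker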
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 02:43:26 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.942866425Z" level=info msg="Starting up"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.943763668Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.944369312Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.963212247Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978801260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978878068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978940997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978976313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979090976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979146579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979281764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979324699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979358968Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979388949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979509469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979673869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981224964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981280787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981417525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981461093Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981614004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981665659Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981698019Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982592626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982648000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982684065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982718166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982750040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982814241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983031848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983121896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983158117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983191363Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983222051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983251729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983281020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983310646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983365464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983424196Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983456426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983490080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983532546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983566917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983597251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983626815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983656366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983688471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983717622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983747001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983776597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983807814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983836828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983866074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983899521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983932007Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983971051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984002474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984033203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984105597Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984151257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984184338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984216206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984244573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984272421Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984300228Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984488222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984552119Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984643939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984685931Z" level=info msg="containerd successfully booted in 0.022307s"
	Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.964317792Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.975720029Z" level=info msg="Loading containers: start."
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.095278957Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.131300004Z" level=info msg="Loading containers: done."
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148511044Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148856577Z" level=info msg="Daemon has completed initialization"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167128553Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167290954Z" level=info msg="API listen on [::]:2376"
	Jun 11 02:43:28 multinode-353000-m03 systemd[1]: Started Docker Application Container Engine.
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.132181324Z" level=info msg="Processing signal 'terminated'"
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.133520177Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 02:43:29 multinode-353000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134062466Z" level=info msg="Daemon shutdown complete"
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134155736Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134172085Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:43:30 multinode-353000-m03 dockerd[870]: time="2024-06-11T02:43:30.183722279Z" level=info msg="Starting up"
	Jun 11 02:44:30 multinode-353000-m03 dockerd[870]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 02:44:30 multinode-353000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
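	The journal excerpt above ends with dockerd (pid 870) timing out while dialing /run/containerd/containerd.sock, even though the managed containerd it spawned earlier was serving /var/run/docker/containerd/containerd.sock. A diagnostic sketch for a node in this state (run via `minikube ssh -n multinode-353000-m03`; these are standard systemd/coreutils commands, not part of the test run):

	  $ sudo systemctl status containerd --no-pager
	  $ sudo journalctl -u containerd --no-pager | tail -n 50
	  $ ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
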
	W0610 19:44:29.867073    9839 out.go:239] * 
	W0610 19:44:29.882762    9839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 19:44:29.903866    9839 out.go:177] 

** /stderr **
multinode_test.go:284: I0610 19:43:12.500594    9839 out.go:291] Setting OutFile to fd 1 ...
I0610 19:43:12.501523    9839 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:43:12.501529    9839 out.go:304] Setting ErrFile to fd 2...
I0610 19:43:12.501533    9839 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:43:12.501739    9839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:43:12.502088    9839 mustload.go:65] Loading cluster: multinode-353000
I0610 19:43:12.502411    9839 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:43:12.502743    9839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:43:12.502785    9839 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:43:12.511133    9839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53511
I0610 19:43:12.511543    9839 main.go:141] libmachine: () Calling .GetVersion
I0610 19:43:12.511972    9839 main.go:141] libmachine: Using API Version  1
I0610 19:43:12.511991    9839 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:43:12.512219    9839 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:43:12.512330    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
I0610 19:43:12.512423    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:43:12.512495    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9620
I0610 19:43:12.513715    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid 9620 missing from process table
W0610 19:43:12.513748    9839 host.go:58] "multinode-353000-m03" host status: Stopped
I0610 19:43:12.535149    9839 out.go:177] * Starting "multinode-353000-m03" worker node in "multinode-353000" cluster
I0610 19:43:12.556127    9839 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0610 19:43:12.556201    9839 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0610 19:43:12.556225    9839 cache.go:56] Caching tarball of preloaded images
I0610 19:43:12.556563    9839 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0610 19:43:12.556602    9839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0610 19:43:12.556815    9839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
I0610 19:43:12.558398    9839 start.go:360] acquireMachinesLock for multinode-353000-m03: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 19:43:12.558571    9839 start.go:364] duration metric: took 119.009µs to acquireMachinesLock for "multinode-353000-m03"
I0610 19:43:12.558609    9839 start.go:96] Skipping create...Using existing machine configuration
I0610 19:43:12.558631    9839 fix.go:54] fixHost starting: m03
I0610 19:43:12.559072    9839 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:43:12.559103    9839 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:43:12.568029    9839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53513
I0610 19:43:12.568392    9839 main.go:141] libmachine: () Calling .GetVersion
I0610 19:43:12.568781    9839 main.go:141] libmachine: Using API Version  1
I0610 19:43:12.568802    9839 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:43:12.569032    9839 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:43:12.569155    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:12.569254    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
I0610 19:43:12.569337    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:43:12.569426    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9620
I0610 19:43:12.570630    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid 9620 missing from process table
I0610 19:43:12.570652    9839 fix.go:112] recreateIfNeeded on multinode-353000-m03: state=Stopped err=<nil>
I0610 19:43:12.570668    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
W0610 19:43:12.570742    9839 fix.go:138] unexpected machine state, will restart: <nil>
I0610 19:43:12.592038    9839 out.go:177] * Restarting existing hyperkit VM for "multinode-353000-m03" ...
I0610 19:43:12.613146    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .Start
I0610 19:43:12.613375    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:43:12.613405    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid
I0610 19:43:12.613419    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Using UUID 9ed320a4-4e20-4225-87bc-ec0cd1dc4108
I0610 19:43:12.631877    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Generated MAC fe:8b:79:f3:b9:7
I0610 19:43:12.631920    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
I0610 19:43:12.632045    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9ed320a4-4e20-4225-87bc-ec0cd1dc4108", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f1410)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0610 19:43:12.632080    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9ed320a4-4e20-4225-87bc-ec0cd1dc4108", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002f1410)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0610 19:43:12.632152    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9ed320a4-4e20-4225-87bc-ec0cd1dc4108", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/multinode-353000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
I0610 19:43:12.632224    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9ed320a4-4e20-4225-87bc-ec0cd1dc4108 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/multinode-353000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
I0610 19:43:12.632246    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0610 19:43:12.633636    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 DEBUG: hyperkit: Pid is 9843
I0610 19:43:12.634130    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Attempt 0
I0610 19:43:12.634143    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:43:12.634220    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
I0610 19:43:12.636295    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Searching for fe:8b:79:f3:b9:7 in /var/db/dhcpd_leases ...
I0610 19:43:12.636374    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Found 20 entries in /var/db/dhcpd_leases!
I0610 19:43:12.636404    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x6667b9bf}
I0610 19:43:12.636427    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | Found match: fe:8b:79:f3:b9:7
I0610 19:43:12.636453    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | IP: 192.169.0.21
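
The driver recovers the node's IP by scanning macOS's DHCP lease database for the MAC address it generated. The same lookup can be approximated by hand (MAC and file path taken from the log above; the raw lease file's field layout differs slightly from the parsed struct shown):

  $ grep -i -B2 -A3 'fe:8b:79:f3:b9:7' /var/db/dhcpd_leases
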
I0610 19:43:12.636496    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetConfigRaw
I0610 19:43:12.637207    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
I0610 19:43:12.637456    9839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
I0610 19:43:12.637932    9839 machine.go:94] provisionDockerMachine start ...
I0610 19:43:12.637943    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:12.638120    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:12.638228    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:12.638329    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:12.638444    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:12.638565    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:12.638699    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:12.638969    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:12.638983    9839 main.go:141] libmachine: About to run SSH command:
hostname
I0610 19:43:12.642645    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0610 19:43:12.651837    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0610 19:43:12.653126    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0610 19:43:12.653161    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0610 19:43:12.653186    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0610 19:43:12.653204    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0610 19:43:13.038643    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0610 19:43:13.038660    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0610 19:43:13.153576    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0610 19:43:13.153596    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0610 19:43:13.153619    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0610 19:43:13.153631    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0610 19:43:13.154475    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0610 19:43:13.154484    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0610 19:43:18.506079    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0610 19:43:18.506146    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0610 19:43:18.506154    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0610 19:43:18.529586    9839 main.go:141] libmachine: (multinode-353000-m03) DBG | 2024/06/10 19:43:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0610 19:43:25.804740    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0610 19:43:25.804756    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
I0610 19:43:25.804926    9839 buildroot.go:166] provisioning hostname "multinode-353000-m03"
I0610 19:43:25.804938    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
I0610 19:43:25.805031    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:25.805125    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:25.805220    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:25.805306    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:25.805390    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:25.805511    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:25.805654    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:25.805663    9839 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-353000-m03 && echo "multinode-353000-m03" | sudo tee /etc/hostname
I0610 19:43:25.868684    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m03

I0610 19:43:25.868705    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:25.868857    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:25.868948    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:25.869031    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:25.869118    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:25.869271    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:25.869480    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:25.869493    9839 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-353000-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-353000-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0610 19:43:25.928390    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
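
The shell snippet above is an idempotent hostname fixup: it rewrites an existing 127.0.1.1 entry or appends one so the new hostname resolves locally. A quick check on the guest (verification only, not part of the provisioning flow):

  $ grep '^127.0.1.1' /etc/hosts        # expected: 127.0.1.1 multinode-353000-m03
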
I0610 19:43:25.928413    9839 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
I0610 19:43:25.928430    9839 buildroot.go:174] setting up certificates
I0610 19:43:25.928443    9839 provision.go:84] configureAuth start
I0610 19:43:25.928452    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetMachineName
I0610 19:43:25.928606    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
I0610 19:43:25.928718    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:25.928816    9839 provision.go:143] copyHostCerts
I0610 19:43:25.928846    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
I0610 19:43:25.928917    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
I0610 19:43:25.928925    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
I0610 19:43:25.929059    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
I0610 19:43:25.929256    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
I0610 19:43:25.929296    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
I0610 19:43:25.929301    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
I0610 19:43:25.929391    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
I0610 19:43:25.929535    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
I0610 19:43:25.929574    9839 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
I0610 19:43:25.929578    9839 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
I0610 19:43:25.929669    9839 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
I0610 19:43:25.929823    9839 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m03 san=[127.0.0.1 192.169.0.21 localhost minikube multinode-353000-m03]
I0610 19:43:26.058889    9839 provision.go:177] copyRemoteCerts
I0610 19:43:26.058948    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0610 19:43:26.058966    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:26.059150    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:26.059345    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.059574    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:26.059755    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
I0610 19:43:26.093665    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0610 19:43:26.093756    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0610 19:43:26.112700    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0610 19:43:26.112765    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0610 19:43:26.131814    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
I0610 19:43:26.131879    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I0610 19:43:26.151279    9839 provision.go:87] duration metric: took 222.828616ms to configureAuth
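
configureAuth copies the CA and a freshly generated server certificate into /etc/docker so dockerd can require TLS on tcp://0.0.0.0:2376 (see the ExecStart line further below). Had the daemon come up, the endpoint could be checked from the host with the client material staged by copyHostCerts above (a sketch; assumes a docker CLI is available on the host):

  $ docker --tlsverify \
      --tlscacert /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem \
      --tlscert   /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem \
      --tlskey    /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem \
      -H tcp://192.169.0.21:2376 version
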
I0610 19:43:26.151292    9839 buildroot.go:189] setting minikube options for container-runtime
I0610 19:43:26.151456    9839 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:43:26.151469    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:26.151609    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:26.151692    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:26.151777    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.151874    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.151972    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:26.152074    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:26.152198    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:26.152205    9839 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0610 19:43:26.206329    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0610 19:43:26.206341    9839 buildroot.go:70] root file system type: tmpfs
I0610 19:43:26.206411    9839 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0610 19:43:26.206426    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:26.206547    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:26.206627    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.206701    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.206779    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:26.206894    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:26.207036    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:26.207079    9839 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0610 19:43:26.271573    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0610 19:43:26.271595    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:26.271735    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:26.271830    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.271923    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:26.272014    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:26.272144    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:26.272283    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:26.272295    9839 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0610 19:43:27.824055    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
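
The diff-or-replace one-liner above installs the unit only when it has changed, and the unit body relies on the standard systemd idiom of an empty `ExecStart=` to clear the inherited command before setting a new one (the comments in the unit explain why). The same pattern in a generic drop-in, for reference (illustrative path and command, not from this run):

  # /etc/systemd/system/docker.service.d/override.conf
  [Service]
  ExecStart=
  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

  $ sudo systemctl daemon-reload && sudo systemctl restart docker
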

I0610 19:43:27.824071    9839 machine.go:97] duration metric: took 15.186657886s to provisionDockerMachine
I0610 19:43:27.824082    9839 start.go:293] postStartSetup for "multinode-353000-m03" (driver="hyperkit")
I0610 19:43:27.824091    9839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0610 19:43:27.824102    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:27.824287    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0610 19:43:27.824303    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:27.824414    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:27.824515    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:27.824601    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:27.824685    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
I0610 19:43:27.864127    9839 ssh_runner.go:195] Run: cat /etc/os-release
I0610 19:43:27.868546    9839 info.go:137] Remote host: Buildroot 2023.02.9
I0610 19:43:27.868563    9839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
I0610 19:43:27.868710    9839 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
I0610 19:43:27.868916    9839 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
I0610 19:43:27.868922    9839 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
I0610 19:43:27.869134    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0610 19:43:27.880487    9839 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
I0610 19:43:27.915122    9839 start.go:296] duration metric: took 91.029673ms for postStartSetup
I0610 19:43:27.915147    9839 fix.go:56] duration metric: took 15.357055265s for fixHost
I0610 19:43:27.915159    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:27.915293    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:27.915401    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:27.915484    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:27.915567    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:27.915677    9839 main.go:141] libmachine: Using SSH client type: native
I0610 19:43:27.915819    9839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb5d0f00] 0xb5d3c60 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
I0610 19:43:27.915832    9839 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0610 19:43:27.969555    9839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073808.315530144

I0610 19:43:27.969566    9839 fix.go:216] guest clock: 1718073808.315530144
I0610 19:43:27.969571    9839 fix.go:229] Guest: 2024-06-10 19:43:28.315530144 -0700 PDT Remote: 2024-06-10 19:43:27.91515 -0700 PDT m=+15.450960394 (delta=400.380144ms)
I0610 19:43:27.969594    9839 fix.go:200] guest clock delta is within tolerance: 400.380144ms
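
The %!s(MISSING) placeholders in the logged command are Go fmt artifacts; the command actually run appears to be `date +%s.%N`, whose output fix.go compares against the host clock, accepting the ~400ms delta seen here. Sampling the same value by hand:

  $ date +%s.%N        # run on both host and guest and compare
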
I0610 19:43:27.969604    9839 start.go:83] releasing machines lock for "multinode-353000-m03", held for 15.411555485s
I0610 19:43:27.969624    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:27.969749    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
I0610 19:43:27.969849    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:27.970170    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:27.970269    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
I0610 19:43:27.970351    9839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0610 19:43:27.970384    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:27.970425    9839 ssh_runner.go:195] Run: systemctl --version
I0610 19:43:27.970436    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
I0610 19:43:27.970471    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:27.970506    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
I0610 19:43:27.970548    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:27.970570    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
I0610 19:43:27.970645    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:27.970659    9839 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
I0610 19:43:27.970737    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
I0610 19:43:27.970755    9839 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
I0610 19:43:28.001679    9839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0610 19:43:28.053397    9839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0610 19:43:28.053511    9839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0610 19:43:28.068040    9839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0610 19:43:28.068055    9839 start.go:494] detecting cgroup driver to use...
I0610 19:43:28.068157    9839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0610 19:43:28.083444    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0610 19:43:28.092622    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0610 19:43:28.101546    9839 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0610 19:43:28.101594    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0610 19:43:28.110638    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0610 19:43:28.119615    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0610 19:43:28.128848    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0610 19:43:28.141299    9839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0610 19:43:28.151747    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0610 19:43:28.160510    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0610 19:43:28.168962    9839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0610 19:43:28.177615    9839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0610 19:43:28.185433    9839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0610 19:43:28.193029    9839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0610 19:43:28.290399    9839 ssh_runner.go:195] Run: sudo systemctl restart containerd
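
The sed pipeline above rewrites /etc/containerd/config.toml so the CRI plugin's runc runtime uses cgroupfs rather than the systemd cgroup driver. The key being toggled looks like this in containerd 1.7's config (a fragment sketch, not the full file):

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
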
I0610 19:43:28.309700    9839 start.go:494] detecting cgroup driver to use...
I0610 19:43:28.309773    9839 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0610 19:43:28.326304    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0610 19:43:28.339423    9839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0610 19:43:28.359836    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0610 19:43:28.370954    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0610 19:43:28.382062    9839 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0610 19:43:28.413605    9839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0610 19:43:28.423953    9839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0610 19:43:28.439193    9839 ssh_runner.go:195] Run: which cri-dockerd
I0610 19:43:28.442081    9839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0610 19:43:28.449345    9839 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0610 19:43:28.463541    9839 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0610 19:43:28.563785    9839 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0610 19:43:28.665938    9839 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0610 19:43:28.666021    9839 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
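
docker.go ships a small /etc/docker/daemon.json (130 bytes, per the scp line above) selecting the cgroupfs driver. The exact payload is not shown in this log; a plausible sketch of such a file is:

  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" }
  }
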
I0610 19:43:28.680584    9839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0610 19:43:28.773525    9839 ssh_runner.go:195] Run: sudo systemctl restart docker
I0610 19:44:29.808614    9839 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.037190692s)
I0610 19:44:29.808676    9839 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0610 19:44:29.845338    9839 out.go:177] 
W0610 19:44:29.866976    9839 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Jun 11 02:43:26 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.942866425Z" level=info msg="Starting up"
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.943763668Z" level=info msg="containerd not running, starting managed containerd"
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.944369312Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.963212247Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978801260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978878068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978940997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978976313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979090976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979146579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979281764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979324699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979358968Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979388949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979509469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979673869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981224964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981280787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981417525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981461093Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981614004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981665659Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981698019Z" level=info msg="metadata content store policy set" policy=shared
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982592626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982648000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982684065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982718166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982750040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982814241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983031848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983121896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983158117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983191363Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983222051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983251729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983281020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983310646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983365464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983424196Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983456426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983490080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983532546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983566917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983597251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983626815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983656366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983688471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983717622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983747001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983776597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983807814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983836828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983866074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983899521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983932007Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983971051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984002474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984033203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984105597Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984151257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984184338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984216206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984244573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984272421Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984300228Z" level=info msg="NRI interface is disabled by configuration."
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984488222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984552119Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984643939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984685931Z" level=info msg="containerd successfully booted in 0.022307s"
Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.964317792Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.975720029Z" level=info msg="Loading containers: start."
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.095278957Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.131300004Z" level=info msg="Loading containers: done."
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148511044Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148856577Z" level=info msg="Daemon has completed initialization"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167128553Z" level=info msg="API listen on /var/run/docker.sock"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167290954Z" level=info msg="API listen on [::]:2376"
Jun 11 02:43:28 multinode-353000-m03 systemd[1]: Started Docker Application Container Engine.
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.132181324Z" level=info msg="Processing signal 'terminated'"
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.133520177Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 11 02:43:29 multinode-353000-m03 systemd[1]: Stopping Docker Application Container Engine...
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134062466Z" level=info msg="Daemon shutdown complete"
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134155736Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134172085Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: docker.service: Deactivated successfully.
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Stopped Docker Application Container Engine.
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 11 02:43:30 multinode-353000-m03 dockerd[870]: time="2024-06-11T02:43:30.183722279Z" level=info msg="Starting up"
Jun 11 02:44:30 multinode-353000-m03 dockerd[870]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
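Note: exit status 90 (RUNTIME_ENABLE) means the provisioner's `sudo systemctl restart docker` inside the guest never came back healthy, so the node start was aborted; the journal dump that follows is the authoritative trace. To poke at the failing unit interactively, one could open a shell on the worker along these lines (the --node flag and the in-guest commands are illustrative, not taken from this run):

	out/minikube-darwin-amd64 -p multinode-353000 ssh -n m03
	# inside the guest:
	sudo systemctl status docker.service    # unit state and last exit code
	sudo journalctl -xeu docker.service     # the same log dumped below
	sudo systemctl status containerd        # dockerd dials containerd's socket on startup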

sudo journalctl --no-pager -u docker:
-- stdout --
Jun 11 02:43:26 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.942866425Z" level=info msg="Starting up"
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.943763668Z" level=info msg="containerd not running, starting managed containerd"
Jun 11 02:43:26 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:26.944369312Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.963212247Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978801260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978878068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978940997Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.978976313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979090976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979146579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979281764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979324699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979358968Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979388949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979509469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.979673869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981224964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981280787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981417525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981461093Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981614004Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981665659Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.981698019Z" level=info msg="metadata content store policy set" policy=shared
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982592626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982648000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982684065Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982718166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982750040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.982814241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983031848Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983121896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983158117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983191363Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983222051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983251729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983281020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983310646Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983365464Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983424196Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983456426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983490080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983532546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983566917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983597251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983626815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983656366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983688471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983717622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983747001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983776597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983807814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983836828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983866074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983899521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983932007Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.983971051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984002474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984033203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984105597Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984151257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984184338Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984216206Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984244573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984272421Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984300228Z" level=info msg="NRI interface is disabled by configuration."
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984488222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984552119Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984643939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jun 11 02:43:26 multinode-353000-m03 dockerd[500]: time="2024-06-11T02:43:26.984685931Z" level=info msg="containerd successfully booted in 0.022307s"
Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.964317792Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jun 11 02:43:27 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:27.975720029Z" level=info msg="Loading containers: start."
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.095278957Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.131300004Z" level=info msg="Loading containers: done."
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148511044Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.148856577Z" level=info msg="Daemon has completed initialization"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167128553Z" level=info msg="API listen on /var/run/docker.sock"
Jun 11 02:43:28 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:28.167290954Z" level=info msg="API listen on [::]:2376"
Jun 11 02:43:28 multinode-353000-m03 systemd[1]: Started Docker Application Container Engine.
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.132181324Z" level=info msg="Processing signal 'terminated'"
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.133520177Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jun 11 02:43:29 multinode-353000-m03 systemd[1]: Stopping Docker Application Container Engine...
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134062466Z" level=info msg="Daemon shutdown complete"
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134155736Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jun 11 02:43:29 multinode-353000-m03 dockerd[494]: time="2024-06-11T02:43:29.134172085Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: docker.service: Deactivated successfully.
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Stopped Docker Application Container Engine.
Jun 11 02:43:30 multinode-353000-m03 systemd[1]: Starting Docker Application Container Engine...
Jun 11 02:43:30 multinode-353000-m03 dockerd[870]: time="2024-06-11T02:43:30.183722279Z" level=info msg="Starting up"
Jun 11 02:44:30 multinode-353000-m03 dockerd[870]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 11 02:44:30 multinode-353000-m03 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
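Note: the journal above pinpoints the failure. The first dockerd (pid 494) starts its own managed containerd, completes initialization at 02:43:28, and is then stopped at 02:43:29 (the "Processing signal 'terminated'" line), which corresponds to the `systemctl restart docker` from the error above. The replacement dockerd (pid 870), evidently now configured to use the system containerd rather than a managed one, tries to dial /run/containerd/containerd.sock, hits its dial deadline exactly 60 seconds later at 02:44:30, and exits, so systemd marks docker.service failed. A quick manual check of that precondition, assuming shell access to the guest and that the ctr CLI is installed there (the socket path is from the log; the commands are a sketch):

	ls -l /run/containerd/containerd.sock                        # does the socket exist at all?
	sudo ctr --address /run/containerd/containerd.sock version   # is anything answering on it?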
W0610 19:44:29.867073    9839 out.go:239] * 
* 
W0610 19:44:29.882762    9839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 19:44:29.903866    9839 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-353000 node start m03 -v=7 --alsologtostderr": exit status 90
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (320.623378ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
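Note: the non-zero status exit reflects the degraded cluster: the control plane and m02 report healthy, while on m03 the host is up but the kubelet is stopped, consistent with the runtime failure above (the kubelet cannot run containers without Docker). A one-liner to confirm both units from the host might look like this (illustrative; assumes `minikube ssh` accepts --node and forwards the trailing command):

	out/minikube-darwin-amd64 -p multinode-353000 ssh -n m03 -- sudo systemctl is-active docker kubelet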
** stderr ** 
	I0610 19:44:29.998430    9856 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:29.998714    9856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:29.998720    9856 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:29.998724    9856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:29.998896    9856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:29.999087    9856 out.go:298] Setting JSON to false
	I0610 19:44:29.999116    9856 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:29.999162    9856 notify.go:220] Checking for updates...
	I0610 19:44:29.999444    9856 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:29.999460    9856 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:29.999833    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:29.999889    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.009377    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53535
	I0610 19:44:30.009712    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.010132    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.010148    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.010367    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.010498    9856 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:30.010584    9856 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:30.010665    9856 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:30.011669    9856 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:30.011687    9856 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:30.011928    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.011947    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.020403    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53537
	I0610 19:44:30.020725    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.021044    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.021054    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.021286    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.021392    9856 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:30.021483    9856 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:30.021730    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.021757    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.030282    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53539
	I0610 19:44:30.030618    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.030958    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.030975    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.031183    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.031279    9856 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:30.031418    9856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:30.031441    9856 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:30.031543    9856 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:30.031616    9856 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:30.031697    9856 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:30.031778    9856 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:30.064597    9856 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:30.069097    9856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:30.081479    9856 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:30.081505    9856 api_server.go:166] Checking apiserver status ...
	I0610 19:44:30.081547    9856 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:30.093290    9856 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:30.100703    9856 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:30.100750    9856 ssh_runner.go:195] Run: ls
	I0610 19:44:30.104015    9856 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:30.107835    9856 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:30.107847    9856 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:30.107858    9856 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:30.107869    9856 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:30.108169    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.108191    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.122063    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53543
	I0610 19:44:30.122418    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.122748    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.122758    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.122977    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.123087    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:30.123170    9856 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:30.123264    9856 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:30.124278    9856 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:30.124285    9856 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:30.124528    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.124549    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.132984    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53545
	I0610 19:44:30.133323    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.133625    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.133635    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.133839    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.133939    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:30.134021    9856 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:30.134282    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.134303    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.142794    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53547
	I0610 19:44:30.143108    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.143462    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.143474    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.143708    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.143831    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:30.143961    9856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:30.143973    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:30.144048    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:30.144138    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:30.144262    9856 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:30.144345    9856 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:30.177428    9856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:30.188474    9856 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:30.188495    9856 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:30.188784    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.188806    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.197660    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53550
	I0610 19:44:30.197978    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.198311    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.198328    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.198556    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.198674    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:30.198752    9856 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:30.198849    9856 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:30.199853    9856 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:30.199864    9856 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:30.200143    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.200172    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.208805    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53552
	I0610 19:44:30.209149    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.209504    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.209513    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.209706    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.209817    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:30.209899    9856 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:30.210157    9856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.210179    9856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.218748    9856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53554
	I0610 19:44:30.219090    9856 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.219424    9856 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.219435    9856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.219628    9856 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.219734    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:30.219862    9856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:30.219872    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:30.219950    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:30.220026    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:30.220107    9856 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:30.220190    9856 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:30.250707    9856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:30.261673    9856 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
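Note on the "unable to find freezer cgroup" warning above: minikube greps /proc/<pid>/cgroup for an "N:freezer:" entry to locate the apiserver's cgroup, and a common reason for that grep to exit 1 is a cgroup v2 guest, where the file contains only a single "0::/path" line with no per-controller entries. As the log shows, minikube simply falls back to probing https://192.169.0.19:8443/healthz directly, which returns 200, so the warning is harmless here. One way to check a guest's cgroup layout (illustrative):

	stat -fc %T /sys/fs/cgroup    # prints cgroup2fs on a cgroup v2 system, tmpfs on cgroup v1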
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (318.190701ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:44:30.908223    9867 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:30.908507    9867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:30.908513    9867 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:30.908517    9867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:30.908705    9867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:30.908893    9867 out.go:298] Setting JSON to false
	I0610 19:44:30.908915    9867 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:30.908955    9867 notify.go:220] Checking for updates...
	I0610 19:44:30.909240    9867 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:30.909256    9867 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:30.909657    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.909709    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.918925    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53558
	I0610 19:44:30.919285    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.919693    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.919714    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.919931    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.920041    9867 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:30.920129    9867 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:30.920202    9867 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:30.921231    9867 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:30.921253    9867 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:30.921528    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.921555    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.930168    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53560
	I0610 19:44:30.930496    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.930806    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.930825    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.931056    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.931169    9867 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:30.931251    9867 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:30.931505    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:30.931532    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:30.939910    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53562
	I0610 19:44:30.940240    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:30.940559    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:30.940569    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:30.940773    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:30.940874    9867 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:30.941012    9867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:30.941034    9867 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:30.941118    9867 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:30.941195    9867 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:30.941282    9867 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:30.941366    9867 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:30.974369    9867 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:30.979147    9867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:30.989890    9867 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:30.989913    9867 api_server.go:166] Checking apiserver status ...
	I0610 19:44:30.989949    9867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:31.000787    9867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:31.011403    9867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:31.011453    9867 ssh_runner.go:195] Run: ls
	I0610 19:44:31.014842    9867 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:31.018148    9867 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:31.018168    9867 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:31.018177    9867 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:31.018192    9867 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:31.018450    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.018477    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.031833    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53566
	I0610 19:44:31.032173    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.032528    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.032549    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.032751    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.032869    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:31.032958    9867 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:31.033033    9867 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:31.034067    9867 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:31.034075    9867 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:31.034314    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.034333    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.042792    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53568
	I0610 19:44:31.043096    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.043421    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.043437    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.043659    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.043757    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:31.043845    9867 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:31.044104    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.044135    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.052697    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53570
	I0610 19:44:31.053024    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.053350    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.053361    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.053588    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.053702    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:31.053839    9867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:31.053851    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:31.053927    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:31.054008    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:31.054097    9867 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:31.054162    9867 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:31.086277    9867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:31.096484    9867 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:31.096503    9867 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:31.096788    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.096810    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.105495    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53573
	I0610 19:44:31.105817    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.106171    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.106188    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.106394    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.106537    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:31.106620    9867 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:31.106691    9867 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:31.107711    9867 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:31.107719    9867 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:31.108000    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.108025    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.116687    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53575
	I0610 19:44:31.117016    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.117369    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.117384    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.117586    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.117694    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:31.117781    9867 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:31.118055    9867 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:31.118079    9867 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:31.126609    9867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53577
	I0610 19:44:31.126956    9867 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:31.127317    9867 main.go:141] libmachine: Using API Version  1
	I0610 19:44:31.127335    9867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:31.127556    9867 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:31.127668    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:31.127789    9867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:31.127801    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:31.127878    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:31.127991    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:31.128074    9867 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:31.128154    9867 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:31.158191    9867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:31.168762    9867 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
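The stderr block above shows the per-node probe sequence that `minikube status` repeats for every node: launch the hyperkit driver plugin server, open an SSH client to the node, check disk usage of /var, then test whether kubelet is active. The following is a minimal sketch of those two probes, run locally via os/exec purely for illustration; the real code tunnels the same commands over SSH to each VM through its ssh_runner:

	// probe_sketch.go -- the two per-node probes visible in the log above.
	// Assumption: local execution stands in for minikube's SSH-based runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Disk usage of /var: second row, fifth column (use%), as in the log.
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			fmt.Println("df probe failed:", err)
			return
		}
		fmt.Println("/var usage:", strings.TrimSpace(string(out)))

		// Kubelet state: `systemctl is-active --quiet` exits 0 iff the unit is
		// active, which is how the runs above distinguish Kubelet:Running
		// from Kubelet:Stopped.
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet: Stopped")
		} else {
			fmt.Println("kubelet: Running")
		}
	}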
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (317.046424ms)

                                                
                                                
-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:44:32.855611    9878 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:32.855814    9878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:32.855820    9878 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:32.855824    9878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:32.856001    9878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:32.856179    9878 out.go:298] Setting JSON to false
	I0610 19:44:32.856201    9878 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:32.856237    9878 notify.go:220] Checking for updates...
	I0610 19:44:32.856520    9878 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:32.856536    9878 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:32.856891    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.856939    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.865801    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53581
	I0610 19:44:32.866119    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:32.866513    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:32.866530    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:32.866739    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:32.866844    9878 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:32.866932    9878 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:32.866988    9878 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:32.868035    9878 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:32.868056    9878 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:32.868294    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.868313    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.876867    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53583
	I0610 19:44:32.877191    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:32.877583    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:32.877612    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:32.877864    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:32.877979    9878 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:32.878072    9878 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:32.878323    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.878353    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.886900    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53585
	I0610 19:44:32.887268    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:32.887576    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:32.887585    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:32.887804    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:32.887904    9878 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:32.888052    9878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:32.888076    9878 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:32.888168    9878 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:32.888245    9878 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:32.888326    9878 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:32.888407    9878 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:32.921385    9878 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:32.926325    9878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:32.938016    9878 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:32.938040    9878 api_server.go:166] Checking apiserver status ...
	I0610 19:44:32.938078    9878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:32.949976    9878 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:32.958410    9878 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:32.958454    9878 ssh_runner.go:195] Run: ls
	I0610 19:44:32.961765    9878 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:32.964881    9878 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:32.964891    9878 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:32.964900    9878 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:32.964911    9878 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:32.965171    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.965193    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.979058    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53589
	I0610 19:44:32.979420    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:32.979782    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:32.979791    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:32.979998    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:32.980109    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:32.980189    9878 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:32.980259    9878 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:32.981342    9878 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:32.981350    9878 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:32.981594    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.981614    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.990018    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53591
	I0610 19:44:32.990340    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:32.990697    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:32.990714    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:32.990932    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:32.991033    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:32.991124    9878 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:32.991390    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:32.991412    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:32.999823    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53593
	I0610 19:44:33.000153    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:33.000538    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:33.000555    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:33.000792    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:33.000914    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:33.001047    9878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:33.001059    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:33.001149    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:33.001225    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:33.001336    9878 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:33.001410    9878 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:33.033387    9878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:33.043519    9878 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:33.043540    9878 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:33.043836    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:33.043861    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:33.052499    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53596
	I0610 19:44:33.052833    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:33.053191    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:33.053203    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:33.053428    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:33.053533    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:33.053619    9878 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:33.053688    9878 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:33.054746    9878 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:33.054756    9878 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:33.055016    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:33.055052    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:33.063523    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53598
	I0610 19:44:33.063843    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:33.064150    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:33.064160    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:33.064384    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:33.064493    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:33.064568    9878 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:33.064815    9878 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:33.064839    9878 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:33.073233    9878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53600
	I0610 19:44:33.073552    9878 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:33.073902    9878 main.go:141] libmachine: Using API Version  1
	I0610 19:44:33.073920    9878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:33.074140    9878 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:33.074257    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:33.074383    9878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:33.074394    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:33.074473    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:33.074555    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:33.074631    9878 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:33.074701    9878 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:33.104686    9878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:33.115485    9878 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
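For the control-plane node the run above goes one step further: it reads the server URL from kubeconfig, locates the kube-apiserver process, and then probes https://192.169.0.19:8443/healthz, treating a 200 response with body "ok" as APIServer:Running. A hedged sketch of that last probe follows; InsecureSkipVerify is an illustration-only shortcut, since minikube itself authenticates with the cluster CA and client certificates from kubeconfig:

	// healthz_sketch.go -- the apiserver health probe seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Assumption: certificate verification is skipped here for brevity.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.169.0.19:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver status = Running")
		} else {
			fmt.Printf("apiserver status = %d: %s\n", resp.StatusCode, body)
		}
	}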
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (316.132901ms)

                                                
                                                
-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:44:36.223843    9889 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:36.224133    9889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:36.224138    9889 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:36.224146    9889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:36.224321    9889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:36.224541    9889 out.go:298] Setting JSON to false
	I0610 19:44:36.224564    9889 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:36.224602    9889 notify.go:220] Checking for updates...
	I0610 19:44:36.224863    9889 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:36.224879    9889 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:36.225234    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.225300    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.234337    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53604
	I0610 19:44:36.234670    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.235074    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.235085    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.235353    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.235472    9889 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:36.235549    9889 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:36.235623    9889 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:36.236682    9889 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:36.236700    9889 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:36.236936    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.236956    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.245445    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53606
	I0610 19:44:36.245792    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.246142    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.246161    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.246389    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.246506    9889 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:36.246589    9889 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:36.246840    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.246880    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.255186    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53608
	I0610 19:44:36.255518    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.255847    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.255863    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.256084    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.256198    9889 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:36.256336    9889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:36.256362    9889 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:36.256462    9889 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:36.256545    9889 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:36.256616    9889 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:36.256696    9889 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:36.289216    9889 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:36.293769    9889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:36.304403    9889 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:36.304428    9889 api_server.go:166] Checking apiserver status ...
	I0610 19:44:36.304465    9889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:36.315361    9889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:36.323064    9889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:36.323132    9889 ssh_runner.go:195] Run: ls
	I0610 19:44:36.326417    9889 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:36.330085    9889 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:36.330097    9889 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:36.330106    9889 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:36.330125    9889 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:36.330427    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.330453    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.338993    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53612
	I0610 19:44:36.339319    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.339648    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.339660    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.339882    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.340002    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:36.340086    9889 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:36.340155    9889 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:36.345996    9889 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:36.346011    9889 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:36.346297    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.346320    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.354869    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53614
	I0610 19:44:36.355219    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.355543    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.355557    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.355766    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.355875    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:36.355959    9889 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:36.356208    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.356239    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.364566    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53616
	I0610 19:44:36.364869    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.365224    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.365242    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.365440    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.365548    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:36.365673    9889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:36.365684    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:36.365764    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:36.365843    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:36.365916    9889 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:36.366001    9889 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:36.398257    9889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:36.408509    9889 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:36.408526    9889 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:36.408789    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.408813    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.417410    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53619
	I0610 19:44:36.417724    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.418035    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.418045    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.418239    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.418338    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:36.418428    9889 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:36.418497    9889 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:36.419604    9889 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:36.419615    9889 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:36.419858    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.419886    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.428485    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53621
	I0610 19:44:36.428803    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.429158    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.429174    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.429392    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.429509    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:36.429590    9889 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:36.429861    9889 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:36.429885    9889 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:36.438446    9889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53623
	I0610 19:44:36.438807    9889 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:36.439121    9889 main.go:141] libmachine: Using API Version  1
	I0610 19:44:36.439130    9889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:36.439336    9889 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:36.439442    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:36.439568    9889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:36.439580    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:36.439662    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:36.439737    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:36.439804    9889 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:36.439878    9889 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:36.470552    9889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:36.481246    9889 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
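The W-level "unable to find freezer cgroup" line in every retry is benign: on a cgroup v2 guest, /proc/<pid>/cgroup contains a single "0::<path>" entry with no per-controller "N:freezer:" line, so the `egrep ^[0-9]+:freezer:` exits 1 and the status code falls back to the direct healthz probe. A minimal sketch of that scan, under the assumption that a substring match is close enough to the log's anchored regex (pid 1866 is taken from the log; any pid works):

	// freezer_sketch.go -- why the freezer-cgroup lookup above exits 1.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/proc/1866/cgroup")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.Contains(sc.Text(), ":freezer:") {
				fmt.Println("freezer cgroup:", sc.Text())
				return
			}
		}
		// cgroup v2 hosts land here, matching the warning in the log.
		fmt.Println("no freezer cgroup (expected on cgroup v2)")
	}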
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (323.105699ms)

                                                
                                                
-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:44:40.409625    9900 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:40.409897    9900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:40.409903    9900 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:40.409907    9900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:40.410077    9900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:40.410253    9900 out.go:298] Setting JSON to false
	I0610 19:44:40.410276    9900 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:40.410317    9900 notify.go:220] Checking for updates...
	I0610 19:44:40.410619    9900 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:40.410635    9900 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:40.411014    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.411064    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.419961    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53627
	I0610 19:44:40.420304    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.420713    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.420725    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.420938    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.421049    9900 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:40.421149    9900 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:40.421224    9900 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:40.422286    9900 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:40.422307    9900 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:40.422552    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.422587    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.431088    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53629
	I0610 19:44:40.431433    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.431776    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.431789    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.432059    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.432184    9900 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:40.432270    9900 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:40.432509    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.432536    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.440902    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53631
	I0610 19:44:40.441244    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.441604    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.441616    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.441828    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.441937    9900 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:40.442089    9900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:40.442110    9900 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:40.442185    9900 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:40.442273    9900 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:40.442360    9900 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:40.442447    9900 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:40.478371    9900 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:40.483458    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:40.495483    9900 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:40.495508    9900 api_server.go:166] Checking apiserver status ...
	I0610 19:44:40.495551    9900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:40.507135    9900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:40.515143    9900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:40.515214    9900 ssh_runner.go:195] Run: ls
	I0610 19:44:40.518356    9900 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:40.521610    9900 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:40.521621    9900 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:40.521633    9900 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:40.521644    9900 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:40.521884    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.521904    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.536092    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53635
	I0610 19:44:40.536470    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.536794    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.536807    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.537020    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.537134    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:40.537227    9900 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:40.537300    9900 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:40.538371    9900 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:40.538379    9900 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:40.538617    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.538638    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.547129    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53637
	I0610 19:44:40.547457    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.547796    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.547810    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.548012    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.548125    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:40.548208    9900 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:40.548463    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.548489    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.556901    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53639
	I0610 19:44:40.557217    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.557580    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.557595    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.557795    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.557905    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:40.558047    9900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:40.558060    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:40.558138    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:40.558232    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:40.558325    9900 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:40.558406    9900 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:40.590242    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:40.600589    9900 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:40.600605    9900 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:40.600878    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.600901    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.609621    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53642
	I0610 19:44:40.609942    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.610280    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.610296    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.610529    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.610639    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:40.610719    9900 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:40.610784    9900 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:40.611836    9900 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:40.611844    9900 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:40.612087    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.612109    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.620704    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53644
	I0610 19:44:40.621038    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.621359    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.621378    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.621592    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.621711    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:40.621792    9900 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:40.622058    9900 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:40.622087    9900 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:40.630722    9900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53646
	I0610 19:44:40.631058    9900 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:40.631387    9900 main.go:141] libmachine: Using API Version  1
	I0610 19:44:40.631418    9900 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:40.631655    9900 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:40.631793    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:40.631942    9900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:40.631954    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:40.632071    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:40.632160    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:40.632251    9900 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:40.632333    9900 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:40.663739    9900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:40.674385    9900 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
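Each retry exits with status 2 for the same reason: the aggregated statuses report multinode-353000-m03 with Kubelet:Stopped, and any stopped component makes the status command return non-zero, which is what the test at multinode_test.go:290 keeps tripping over. A hedged sketch of that aggregation; the Status struct mirrors the &{Name:... Host:... Kubelet:...} dumps in the log, while the exact exit-code policy is an assumption inferred from the observed "exit status 2":

	// exitcode_sketch.go -- mapping node statuses to the observed exit code.
	package main

	import (
		"fmt"
		"os"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer string
	}

	func exitCode(statuses []Status) int {
		for _, st := range statuses {
			// Assumption: any stopped component yields the non-zero exit.
			if st.Host == "Stopped" || st.Kubelet == "Stopped" || st.APIServer == "Stopped" {
				return 2 // matches the "exit status 2" seen in each retry
			}
		}
		return 0
	}

	func main() {
		statuses := []Status{
			{"multinode-353000", "Running", "Running", "Running"},
			{"multinode-353000-m02", "Running", "Running", "Irrelevant"},
			{"multinode-353000-m03", "Running", "Stopped", "Irrelevant"},
		}
		fmt.Println("exit code:", exitCode(statuses))
		os.Exit(exitCode(statuses))
	}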
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (316.243289ms)

                                                
                                                
-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:44:46.177268    9911 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:46.177561    9911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:46.177567    9911 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:46.177571    9911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:46.177750    9911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:46.177924    9911 out.go:298] Setting JSON to false
	I0610 19:44:46.177947    9911 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:46.177985    9911 notify.go:220] Checking for updates...
	I0610 19:44:46.178259    9911 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:46.178276    9911 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:46.178629    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.178679    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.187366    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53650
	I0610 19:44:46.187692    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.188093    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.188110    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.188354    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.188480    9911 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:46.188576    9911 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:46.188643    9911 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:46.189671    9911 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:46.189692    9911 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:46.189957    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.189977    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.198408    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53652
	I0610 19:44:46.198754    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.199125    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.199142    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.199338    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.199448    9911 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:46.199526    9911 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:46.199764    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.199790    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.208116    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53654
	I0610 19:44:46.208442    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.208794    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.208806    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.209010    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.209129    9911 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:46.209278    9911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:46.209299    9911 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:46.209379    9911 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:46.209457    9911 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:46.209546    9911 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:46.209650    9911 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:46.243254    9911 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:46.247728    9911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:46.258369    9911 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:46.258394    9911 api_server.go:166] Checking apiserver status ...
	I0610 19:44:46.258437    9911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:46.269619    9911 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:46.276703    9911 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:46.276754    9911 ssh_runner.go:195] Run: ls
	I0610 19:44:46.279995    9911 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:46.283744    9911 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:46.283764    9911 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:46.283776    9911 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:46.283788    9911 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:46.284041    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.284065    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.292769    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53658
	I0610 19:44:46.293106    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.293434    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.293444    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.293645    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.293753    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:46.293854    9911 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:46.293967    9911 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:46.300434    9911 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:46.300445    9911 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:46.300702    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.300737    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.309315    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53660
	I0610 19:44:46.309675    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.310076    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.310096    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.310329    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.310437    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:46.310519    9911 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:46.310802    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.310825    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.319827    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53662
	I0610 19:44:46.320171    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.320497    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.320514    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.320748    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.320864    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:46.321006    9911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:46.321016    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:46.321101    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:46.321177    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:46.321273    9911 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:46.321348    9911 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:46.354314    9911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:46.364632    9911 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:46.364662    9911 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:46.365063    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.365093    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.373803    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53665
	I0610 19:44:46.374147    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.374511    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.374526    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.374744    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.374854    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:46.374939    9911 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:46.375010    9911 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:46.376042    9911 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:46.376052    9911 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:46.376295    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.376326    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.384897    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53667
	I0610 19:44:46.385227    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.385588    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.385599    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.385835    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.385955    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:46.386044    9911 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:46.386302    9911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:46.386325    9911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:46.394866    9911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53669
	I0610 19:44:46.395219    9911 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:46.395554    9911 main.go:141] libmachine: Using API Version  1
	I0610 19:44:46.395568    9911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:46.395805    9911 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:46.395928    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:46.396052    9911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:46.396064    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:46.396142    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:46.396211    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:46.396300    9911 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:46.396365    9911 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:46.426514    9911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:46.436655    9911 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
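The stderr trace above shows the two probes `minikube status` runs over SSH on each worker node once the hyperkit plugin reports the VM as Running: a disk-usage check on /var and a kubelet liveness check. Below is a minimal Go sketch of those same two probes, run locally rather than over SSH; the commands are copied from the log, but this is an illustrative re-run, not minikube's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Disk probe from the log: row 2, column 5 of `df -h /var` is the Use% figure.
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("df probe failed:", err)
	} else {
		fmt.Printf("/var usage: %s", out)
	}

	// Kubelet probe: `systemctl is-active --quiet` exits 0 only while the unit is
	// active, which is what separates Kubelet:Running from Kubelet:Stopped in the
	// status records above.
	if exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Stopped")
	}
}
```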
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (317.235121ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:44:50.833756    9925 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:50.834044    9925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:50.834050    9925 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:50.834053    9925 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:50.834235    9925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:50.834422    9925 out.go:298] Setting JSON to false
	I0610 19:44:50.834446    9925 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:50.834490    9925 notify.go:220] Checking for updates...
	I0610 19:44:50.834779    9925 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:50.834795    9925 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:50.835163    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.835209    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.844363    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53673
	I0610 19:44:50.844742    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.845150    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.845159    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.845386    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.845508    9925 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:50.845581    9925 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:50.845661    9925 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:50.846715    9925 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:50.846736    9925 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:50.846974    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.846995    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.855495    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53675
	I0610 19:44:50.855833    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.856195    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.856221    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.856454    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.856562    9925 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:50.856652    9925 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:50.856903    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.856928    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.865208    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53677
	I0610 19:44:50.865532    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.865826    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.865834    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.866047    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.866157    9925 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:50.866310    9925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:50.866331    9925 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:50.866424    9925 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:50.866503    9925 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:50.866582    9925 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:50.866670    9925 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:50.901198    9925 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:50.905766    9925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:50.916661    9925 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:50.916686    9925 api_server.go:166] Checking apiserver status ...
	I0610 19:44:50.916725    9925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:50.930006    9925 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:50.938391    9925 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:50.938444    9925 ssh_runner.go:195] Run: ls
	I0610 19:44:50.941761    9925 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:50.945596    9925 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:50.945607    9925 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:50.945616    9925 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:50.945633    9925 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:50.945890    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.945911    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.956066    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53681
	I0610 19:44:50.956406    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.956700    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.956726    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.956942    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.957058    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:50.957142    9925 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:50.957213    9925 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:50.958287    9925 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:50.958297    9925 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:50.958564    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.958585    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.967228    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53683
	I0610 19:44:50.967560    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.967909    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.967928    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.968168    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.968281    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:50.968372    9925 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:50.968644    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:50.968673    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:50.977188    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53685
	I0610 19:44:50.977561    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:50.977930    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:50.977945    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:50.978182    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:50.978300    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:50.978439    9925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:50.978450    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:50.978531    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:50.978616    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:50.978699    9925 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:50.978775    9925 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:51.011344    9925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:51.021844    9925 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:51.021860    9925 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:51.022140    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:51.022164    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:51.030869    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53688
	I0610 19:44:51.031190    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:51.031511    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:51.031525    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:51.031765    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:51.031900    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:51.031980    9925 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:51.032054    9925 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:51.033069    9925 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:51.033077    9925 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:51.033314    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:51.033342    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:51.042075    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53690
	I0610 19:44:51.042415    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:51.042722    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:51.042732    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:51.042951    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:51.043071    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:51.043151    9925 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:51.043409    9925 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:51.043434    9925 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:51.052025    9925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53692
	I0610 19:44:51.052362    9925 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:51.052699    9925 main.go:141] libmachine: Using API Version  1
	I0610 19:44:51.052716    9925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:51.052945    9925 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:51.053064    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:51.053199    9925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:51.053213    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:51.053295    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:51.053372    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:51.053480    9925 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:51.053559    9925 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:51.083519    9925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:51.093427    9925 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
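For the control-plane node the trace above adds three steps after the SSH probes: it reads the server URL from the kubeconfig, looks for the kube-apiserver process with pgrep, and finally calls the apiserver's /healthz endpoint, treating an HTTP 200 "ok" as APIServer:Running. A minimal sketch of that last probe against the URL from the log follows; skipping TLS verification is an assumption made to keep the example self-contained (minikube itself authenticates with the kubeconfig credentials).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The test cluster serves a self-signed certificate, so verification is
	// skipped here purely for the sake of a self-contained sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.169.0.19:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log's success path reads: "https://192.169.0.19:8443/healthz returned 200: ok"
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```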
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (312.047572ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:44:59.718709    9936 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:44:59.718978    9936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:59.718983    9936 out.go:304] Setting ErrFile to fd 2...
	I0610 19:44:59.718987    9936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:44:59.719156    9936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:44:59.719338    9936 out.go:298] Setting JSON to false
	I0610 19:44:59.719360    9936 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:44:59.719399    9936 notify.go:220] Checking for updates...
	I0610 19:44:59.719672    9936 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:44:59.719687    9936 status.go:255] checking status of multinode-353000 ...
	I0610 19:44:59.720090    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.720134    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.728949    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53696
	I0610 19:44:59.729281    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.729734    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.729745    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.730019    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.730146    9936 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:44:59.730257    9936 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:59.730316    9936 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:44:59.731353    9936 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:44:59.731369    9936 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:59.731623    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.731645    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.740070    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53698
	I0610 19:44:59.740410    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.740776    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.740802    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.740996    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.741111    9936 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:44:59.741198    9936 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:44:59.741460    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.741498    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.749954    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53700
	I0610 19:44:59.750282    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.750594    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.750603    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.750813    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.750922    9936 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:44:59.751053    9936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:59.751075    9936 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:44:59.751150    9936 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:44:59.751216    9936 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:44:59.751300    9936 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:44:59.751375    9936 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:44:59.784312    9936 ssh_runner.go:195] Run: systemctl --version
	I0610 19:44:59.788666    9936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:59.800161    9936 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:44:59.800186    9936 api_server.go:166] Checking apiserver status ...
	I0610 19:44:59.800223    9936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:44:59.812045    9936 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:44:59.820109    9936 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:44:59.820156    9936 ssh_runner.go:195] Run: ls
	I0610 19:44:59.823388    9936 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:44:59.826432    9936 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:44:59.826442    9936 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:44:59.826451    9936 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:59.826462    9936 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:44:59.826708    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.826730    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.835413    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53704
	I0610 19:44:59.835753    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.836064    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.836078    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.836291    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.836412    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:44:59.836508    9936 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:59.836580    9936 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:44:59.837661    9936 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:44:59.837671    9936 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:59.837930    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.837950    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.846750    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53706
	I0610 19:44:59.847081    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.847394    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.847405    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.847603    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.847706    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:44:59.847778    9936 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:44:59.848040    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.848061    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.856569    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53708
	I0610 19:44:59.856915    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.857266    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.857279    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.857499    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.857616    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:44:59.857745    9936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:59.857756    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:44:59.857831    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:44:59.857918    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:44:59.857993    9936 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:44:59.858069    9936 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:44:59.890636    9936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:59.900861    9936 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:44:59.900877    9936 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:44:59.901175    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.901200    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.909812    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53711
	I0610 19:44:59.910131    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.910467    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.910481    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.910689    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.910806    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:44:59.910890    9936 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:44:59.910968    9936 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:44:59.912016    9936 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:44:59.912023    9936 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:59.912260    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.912284    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.920795    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53713
	I0610 19:44:59.921126    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.921453    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.921472    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.921685    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.921803    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:44:59.921890    9936 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:44:59.922156    9936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:44:59.922191    9936 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:44:59.930750    9936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53715
	I0610 19:44:59.931080    9936 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:44:59.931451    9936 main.go:141] libmachine: Using API Version  1
	I0610 19:44:59.931470    9936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:44:59.931669    9936 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:44:59.931783    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:44:59.931906    9936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:44:59.931917    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:44:59.931984    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:44:59.932076    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:44:59.932160    9936 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:44:59.932253    9936 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:44:59.962874    9936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:44:59.972686    9936 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
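Each control-plane pass also logs a W-level "unable to find freezer cgroup" when `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup` exits 1. Lines naming a freezer controller appear only in the cgroup v1 layout of that file; under cgroup v2 it holds a single `0::/path` entry, so the grep matches nothing. That is a plausible reading of the warning, not something the log states, and the check is non-fatal: status proceeds to the /healthz probe regardless. A small sketch of the same lookup:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Scan /proc/self/cgroup the way the log's egrep scans /proc/1866/cgroup.
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	re := regexp.MustCompile(`^[0-9]+:freezer:`) // same pattern as the log's egrep
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if re.MatchString(scanner.Text()) {
			fmt.Println("freezer cgroup:", scanner.Text())
			return
		}
	}
	// On a cgroup v2 host the file is a single "0::/..." line, so we land here,
	// mirroring the exit-status-1 warning in the trace above.
	fmt.Println("no freezer cgroup line found")
}
```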
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr: exit status 2 (313.811518ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:45:11.727966    9952 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:45:11.728246    9952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:11.728252    9952 out.go:304] Setting ErrFile to fd 2...
	I0610 19:45:11.728255    9952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:11.728419    9952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:45:11.728590    9952 out.go:298] Setting JSON to false
	I0610 19:45:11.728611    9952 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:45:11.728649    9952 notify.go:220] Checking for updates...
	I0610 19:45:11.728916    9952 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:11.728933    9952 status.go:255] checking status of multinode-353000 ...
	I0610 19:45:11.729287    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.729325    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.738482    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53719
	I0610 19:45:11.738849    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.739264    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.739288    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.739543    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.739663    9952 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:45:11.739753    9952 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:11.739827    9952 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:45:11.740835    9952 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:45:11.740855    9952 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:45:11.741090    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.741118    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.749462    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53721
	I0610 19:45:11.749794    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.750127    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.750143    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.750339    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.750454    9952 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:11.750530    9952 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:45:11.750831    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.750855    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.759126    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53723
	I0610 19:45:11.759450    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.759791    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.759807    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.760008    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.760106    9952 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:11.760251    9952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:45:11.760274    9952 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:11.760356    9952 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:11.760441    9952 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:11.760529    9952 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:11.760617    9952 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:11.794626    9952 ssh_runner.go:195] Run: systemctl --version
	I0610 19:45:11.799173    9952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:45:11.809702    9952 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:45:11.809726    9952 api_server.go:166] Checking apiserver status ...
	I0610 19:45:11.809763    9952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:45:11.820769    9952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:45:11.827763    9952 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:45:11.827801    9952 ssh_runner.go:195] Run: ls
	I0610 19:45:11.831191    9952 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:45:11.834389    9952 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:45:11.834399    9952 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:45:11.834408    9952 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:45:11.834419    9952 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:45:11.834657    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.834681    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.843370    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53727
	I0610 19:45:11.843727    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.844108    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.844122    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.844345    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.844462    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:45:11.844563    9952 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:11.844639    9952 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:45:11.845781    9952 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:45:11.845794    9952 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:45:11.846064    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.846099    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.854615    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53729
	I0610 19:45:11.854929    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.855261    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.855277    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.855480    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.855578    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:45:11.855660    9952 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:45:11.855917    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.855938    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.864402    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53731
	I0610 19:45:11.864712    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.865044    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.865067    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.865250    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.865388    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:45:11.865514    9952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:45:11.865526    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:45:11.865606    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:45:11.865683    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:45:11.865767    9952 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:45:11.865835    9952 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:45:11.899871    9952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:45:11.911365    9952 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:45:11.911394    9952 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:45:11.911697    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.911722    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.920494    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53734
	I0610 19:45:11.920806    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.921149    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.921165    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.921356    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.921467    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:45:11.921549    9952 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:11.921624    9952 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9843
	I0610 19:45:11.922637    9952 status.go:330] multinode-353000-m03 host status = "Running" (err=<nil>)
	I0610 19:45:11.922645    9952 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:45:11.922891    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.922918    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.931500    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53736
	I0610 19:45:11.931843    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.932172    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.932186    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.932395    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.932502    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetIP
	I0610 19:45:11.932590    9952 host.go:66] Checking if "multinode-353000-m03" exists ...
	I0610 19:45:11.932849    9952 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:11.932871    9952 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:11.941491    9952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53738
	I0610 19:45:11.941831    9952 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:11.942181    9952 main.go:141] libmachine: Using API Version  1
	I0610 19:45:11.942199    9952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:11.942391    9952 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:11.942501    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .DriverName
	I0610 19:45:11.942626    9952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:45:11.942637    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHHostname
	I0610 19:45:11.942716    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHPort
	I0610 19:45:11.942794    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHKeyPath
	I0610 19:45:11.942878    9952 main.go:141] libmachine: (multinode-353000-m03) Calling .GetSSHUsername
	I0610 19:45:11.942963    9952 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m03/id_rsa Username:docker}
	I0610 19:45:11.973962    9952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:45:11.983926    9952 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-353000 status -v=7 --alsologtostderr" : exit status 2
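The `&{Name:... PodManEnv:}` records printed at status.go:257 throughout the stderr blocks are Go struct dumps. The sketch below mirrors that field set verbatim, while the type name and the concrete field types are assumptions, since the log shows only values. It reproduces the m03 record whose Kubelet:Stopped keeps every retry nonzero: multinode_test.go:290 re-runs status after the node restart, and multinode_test.go:294 gives up while m03's kubelet is still down.

```go
package main

import "fmt"

// Status mirrors the fields printed at status.go:257 above; the type name and
// field types are assumptions, as the log only shows the values.
type Status struct {
	Name       string
	Host       string // "Running" / "Stopped"
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes
	Kubeconfig string // "Irrelevant" on worker nodes
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	// The record that keeps `minikube status` exiting nonzero in every retry.
	// The log prints it through a pointer, hence the leading "&" there.
	m03 := Status{
		Name: "multinode-353000-m03", Host: "Running", Kubelet: "Stopped",
		APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true,
	}
	fmt.Printf("%+v\n", m03)
}
```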
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-353000 -n multinode-353000
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 logs -n 25: (1.991721384s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-353000 cp multinode-353000:/home/docker/cp-test.txt                                                              | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test_multinode-353000_multinode-353000-m03.txt                                        |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000 sudo cat                                                                                                  |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m03 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000_multinode-353000-m03.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp testdata/cp-test.txt                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m03 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp testdata/cp-test.txt                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02:/home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m02 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-353000 node stop m03                                                                                             | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	| node    | multinode-353000 node start                                                                                                | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                                 |                  |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
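The audit table above records TestMultiNode/serial/CopyFile's round-trip pattern: each fixture is pushed to a node with "minikube cp", then read back via "minikube ssh -n <node> sudo cat" so the test can compare contents. A minimal Go sketch of one such round trip, assuming the binary path shown in this report (the helper and hard-coded names are illustrative, not minikube's actual test code):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// minikube shells out to the same binary this report exercises and fails fast
// on a non-zero exit, mirroring the (dbg) Run steps in the table above.
func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	const profile, node = "multinode-353000", "multinode-353000-m02"
	// Push the fixture to the node, then read it back for comparison.
	minikube("-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
	got := minikube("-p", profile, "ssh", "-n", node, "sudo", "cat", "/home/docker/cp-test.txt")
	fmt.Printf("read back %d bytes from %s\n", len(got), node)
}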
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 19:39:40
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
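Every entry below follows that glog-style line format. As a quick aid for filtering these logs, here is a small self-contained Go sketch that splits one such line into its fields (the regexp and field names are mine, not minikube's):

package main

import (
	"fmt"
	"regexp"
)

// Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I0610 19:39:40.505851    9512 out.go:291] Setting OutFile to fd 1 ..."
	if m := glogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}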
	I0610 19:39:40.505851    9512 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:39:40.506113    9512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:39:40.506121    9512 out.go:304] Setting ErrFile to fd 2...
	I0610 19:39:40.506126    9512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:39:40.506309    9512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:39:40.507720    9512 out.go:298] Setting JSON to false
	I0610 19:39:40.529694    9512 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25736,"bootTime":1718047844,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:39:40.529788    9512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:39:40.551305    9512 out.go:177] * [multinode-353000] minikube v1.33.1 on Darwin 14.4.1
	I0610 19:39:40.594853    9512 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:39:40.594931    9512 notify.go:220] Checking for updates...
	I0610 19:39:40.638933    9512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:39:40.659823    9512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:39:40.681018    9512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:39:40.701911    9512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:39:40.722907    9512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:39:40.743965    9512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:39:40.772901    9512 out.go:177] * Using the hyperkit driver based on user configuration
	I0610 19:39:40.814928    9512 start.go:297] selected driver: hyperkit
	I0610 19:39:40.814962    9512 start.go:901] validating driver "hyperkit" against <nil>
	I0610 19:39:40.814986    9512 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:39:40.819561    9512 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:39:40.819673    9512 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 19:39:40.828101    9512 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 19:39:40.831999    9512 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:39:40.832027    9512 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 19:39:40.832067    9512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 19:39:40.832293    9512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:39:40.832353    9512 cni.go:84] Creating CNI manager for ""
	I0610 19:39:40.832364    9512 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 19:39:40.832374    9512 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 19:39:40.832448    9512 start.go:340] cluster config:
	{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:39:40.832532    9512 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:39:40.875838    9512 out.go:177] * Starting "multinode-353000" primary control-plane node in "multinode-353000" cluster
	I0610 19:39:40.896855    9512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:39:40.896930    9512 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 19:39:40.896956    9512 cache.go:56] Caching tarball of preloaded images
	I0610 19:39:40.897167    9512 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:39:40.897187    9512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:39:40.897660    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:39:40.897698    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json: {Name:mk2e142da77a6854037c35c07fc9365ca6062f18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:39:40.899206    9512 start.go:360] acquireMachinesLock for multinode-353000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:39:40.899345    9512 start.go:364] duration metric: took 109.375µs to acquireMachinesLock for "multinode-353000"
	I0610 19:39:40.899397    9512 start.go:93] Provisioning new machine with config: &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 19:39:40.899567    9512 start.go:125] createHost starting for "" (driver="hyperkit")
	I0610 19:39:40.941900    9512 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 19:39:40.942180    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:39:40.942233    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:39:40.952060    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53041
	I0610 19:39:40.952417    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:39:40.952826    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:39:40.952836    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:39:40.953060    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:39:40.953177    9512 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:39:40.953266    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:40.953392    9512 start.go:159] libmachine.API.Create for "multinode-353000" (driver="hyperkit")
	I0610 19:39:40.953421    9512 client.go:168] LocalClient.Create starting
	I0610 19:39:40.953460    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem
	I0610 19:39:40.953513    9512 main.go:141] libmachine: Decoding PEM data...
	I0610 19:39:40.953530    9512 main.go:141] libmachine: Parsing certificate...
	I0610 19:39:40.953602    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem
	I0610 19:39:40.953640    9512 main.go:141] libmachine: Decoding PEM data...
	I0610 19:39:40.953653    9512 main.go:141] libmachine: Parsing certificate...
	I0610 19:39:40.953665    9512 main.go:141] libmachine: Running pre-create checks...
	I0610 19:39:40.953676    9512 main.go:141] libmachine: (multinode-353000) Calling .PreCreateCheck
	I0610 19:39:40.953798    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:40.953967    9512 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:39:40.954439    9512 main.go:141] libmachine: Creating machine...
	I0610 19:39:40.954448    9512 main.go:141] libmachine: (multinode-353000) Calling .Create
	I0610 19:39:40.954546    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:40.954661    9512 main.go:141] libmachine: (multinode-353000) DBG | I0610 19:39:40.954530    9520 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:39:40.954736    9512 main.go:141] libmachine: (multinode-353000) Downloading /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-5942/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 19:39:41.133293    9512 main.go:141] libmachine: (multinode-353000) DBG | I0610 19:39:41.133223    9520 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa...
	I0610 19:39:41.211557    9512 main.go:141] libmachine: (multinode-353000) DBG | I0610 19:39:41.211488    9520 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk...
	I0610 19:39:41.211569    9512 main.go:141] libmachine: (multinode-353000) DBG | Writing magic tar header
	I0610 19:39:41.211578    9512 main.go:141] libmachine: (multinode-353000) DBG | Writing SSH key tar header
	I0610 19:39:41.212472    9512 main.go:141] libmachine: (multinode-353000) DBG | I0610 19:39:41.212373    9520 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000 ...
	I0610 19:39:41.572375    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:41.572403    9512 main.go:141] libmachine: (multinode-353000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid
	I0610 19:39:41.572445    9512 main.go:141] libmachine: (multinode-353000) DBG | Using UUID f0e955cd-5ea6-4315-ac08-1f17bf5837e0
	I0610 19:39:41.675885    9512 main.go:141] libmachine: (multinode-353000) DBG | Generated MAC 6e:10:a7:68:76:8c
	I0610 19:39:41.675905    9512 main.go:141] libmachine: (multinode-353000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:39:41.675957    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:39:41.675993    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00019a630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:39:41.676032    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f0e955cd-5ea6-4315-ac08-1f17bf5837e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:39:41.676060    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f0e955cd-5ea6-4315-ac08-1f17bf5837e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:39:41.676076    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:39:41.679097    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 DEBUG: hyperkit: Pid is 9523
	I0610 19:39:41.679974    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 0
	I0610 19:39:41.679984    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:41.680072    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:41.681036    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:41.681132    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0610 19:39:41.681163    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:39:41.681181    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:39:41.681199    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:39:41.681211    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:39:41.681227    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:39:41.681240    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:39:41.681254    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:39:41.681276    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:39:41.681291    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:39:41.681299    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:39:41.681307    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:39:41.681330    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:39:41.681339    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:39:41.681347    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:39:41.681356    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:39:41.681366    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:39:41.681375    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:39:41.687017    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:39:41.739815    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:39:41.740429    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:39:41.740441    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:39:41.740448    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:39:41.740457    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:39:42.124036    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:39:42.124051    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:39:42.239329    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:39:42.239367    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:39:42.239379    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:39:42.239388    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:39:42.240268    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:39:42.240279    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:42 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:39:43.682985    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 1
	I0610 19:39:43.683024    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:43.683168    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:43.684106    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:43.684227    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0610 19:39:43.684257    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:39:43.684281    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:39:43.684296    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:39:43.684305    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:39:43.684311    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:39:43.684319    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:39:43.684332    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:39:43.684340    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:39:43.684425    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:39:43.684448    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:39:43.684462    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:39:43.684499    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:39:43.684524    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:39:43.684535    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:39:43.684571    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:39:43.684595    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:39:43.684623    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:39:45.684793    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 2
	I0610 19:39:45.684812    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:45.684858    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:45.685708    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:45.685749    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0610 19:39:45.685760    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:39:45.685772    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:39:45.685793    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:39:45.685805    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:39:45.685814    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:39:45.685821    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:39:45.685840    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:39:45.685854    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:39:45.685861    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:39:45.685869    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:39:45.685875    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:39:45.685880    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:39:45.685899    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:39:45.685910    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:39:45.685918    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:39:45.685926    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:39:45.685940    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:39:47.543012    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:47 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:39:47.543111    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:47 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:39:47.543121    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:47 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:39:47.566952    9512 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:39:47 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:39:47.685979    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 3
	I0610 19:39:47.686004    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:47.686113    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:47.687644    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:47.687731    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0610 19:39:47.687749    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:39:47.687765    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:39:47.687778    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:39:47.687792    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:39:47.687806    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:39:47.687819    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:39:47.687840    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:39:47.687871    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:39:47.687887    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:39:47.687940    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:39:47.687964    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:39:47.687987    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:39:47.688011    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:39:47.688026    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:39:47.688042    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:39:47.688075    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:39:47.688086    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:39:49.687952    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 4
	I0610 19:39:49.687993    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:49.688100    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:49.689154    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:49.689191    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0610 19:39:49.689201    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:39:49.689210    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:39:49.689219    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:39:49.689226    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:39:49.689234    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:39:49.689248    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:39:49.689256    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:39:49.689263    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:39:49.689271    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:39:49.689280    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:39:49.689288    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:39:49.689295    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:39:49.689301    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:39:49.689317    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:39:49.689329    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:39:49.689339    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:39:49.689347    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:39:51.689269    9512 main.go:141] libmachine: (multinode-353000) DBG | Attempt 5
	I0610 19:39:51.689290    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:51.689393    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:51.690213    9512 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:39:51.690273    9512 main.go:141] libmachine: (multinode-353000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:39:51.690289    9512 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:39:51.690301    9512 main.go:141] libmachine: (multinode-353000) DBG | Found match: 6e:10:a7:68:76:8c
	I0610 19:39:51.690308    9512 main.go:141] libmachine: (multinode-353000) DBG | IP: 192.169.0.19
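The numbered attempts above are the driver's lease-polling loop: every two seconds it re-reads the host's DHCP leases until the VM's freshly generated MAC (6e:10:a7:68:76:8c) shows up, then adopts that entry's IP. A stripped-down sketch of the loop; the Lease fields mirror what the log prints, and parseLeases is an assumed stand-in for actually reading /var/db/dhcpd_leases:

package main

import (
	"fmt"
	"time"
)

type Lease struct {
	Name, IPAddress, HWAddress string
}

// parseLeases is a placeholder for parsing /var/db/dhcpd_leases on the host.
func parseLeases() []Lease {
	return []Lease{{Name: "minikube", IPAddress: "192.169.0.19", HWAddress: "6e:10:a7:68:76:8c"}}
}

// waitForIP polls the lease table until mac appears, as in the log above.
func waitForIP(mac string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		for _, l := range parseLeases() {
			if l.HWAddress == mac {
				return l.IPAddress, nil
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

func main() {
	ip, err := waitForIP("6e:10:a7:68:76:8c", 30)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}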
	I0610 19:39:51.690363    9512 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:39:51.690900    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:51.690999    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:51.691095    9512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 19:39:51.691103    9512 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:39:51.691183    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:39:51.691241    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:39:51.692053    9512 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 19:39:51.692063    9512 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 19:39:51.692069    9512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 19:39:51.692075    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:51.692158    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:51.692239    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:51.692326    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:51.692422    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:51.692528    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:51.692725    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:51.692732    9512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 19:39:51.711194    9512 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0610 19:39:54.774055    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
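The exchange above is libmachine's SSH readiness check: dial the fresh VM and run a no-op "exit 0", retrying until both the handshake and the command succeed (the 19:39:51 attempt was rejected and retried). A hedged sketch of such a loop using golang.org/x/crypto/ssh; the user, key path, and address come from this log, but the retry cadence and error handling are illustrative, not minikube's implementation:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probe runs the same no-op command the log shows and reports any failure.
func probe(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         10 * time.Second,
	}
	for {
		if err := probe("192.169.0.19:22", cfg); err == nil {
			break
		}
		time.Sleep(3 * time.Second)
	}
	log.Println("SSH is available")
}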
	I0610 19:39:54.774070    9512 main.go:141] libmachine: Detecting the provisioner...
	I0610 19:39:54.774076    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:54.774237    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:54.774362    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.774469    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.774561    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:54.774686    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:54.774825    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:54.774832    9512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 19:39:54.833560    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 19:39:54.833611    9512 main.go:141] libmachine: found compatible host: buildroot
	I0610 19:39:54.833616    9512 main.go:141] libmachine: Provisioning with buildroot...
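Provisioner detection above boils down to inspecting /etc/os-release: the ID field names the distro (buildroot here), which selects the matching provisioner. A minimal sketch of parsing that KEY=value output; quoting rules are simplified and this is not minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease reads the simple KEY=value format shown in the log above.
func parseOSRelease(contents string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "=")
		if !ok || k == "" {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	// The exact output captured above.
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	if f := parseOSRelease(osRelease); f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"])
	}
}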
	I0610 19:39:54.833626    9512 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:39:54.833767    9512 buildroot.go:166] provisioning hostname "multinode-353000"
	I0610 19:39:54.833775    9512 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:39:54.833859    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:54.833953    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:54.834046    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.834135    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.834225    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:54.834377    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:54.834522    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:54.834530    9512 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000 && echo "multinode-353000" | sudo tee /etc/hostname
	I0610 19:39:54.903828    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000
	
	I0610 19:39:54.903849    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:54.903993    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:54.904105    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.904206    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:54.904294    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:54.904434    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:54.904575    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:54.904586    9512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:39:54.969522    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:39:54.969542    9512 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:39:54.969564    9512 buildroot.go:174] setting up certificates
	I0610 19:39:54.969574    9512 provision.go:84] configureAuth start
	I0610 19:39:54.969581    9512 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:39:54.969727    9512 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:39:54.969823    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:54.969910    9512 provision.go:143] copyHostCerts
	I0610 19:39:54.969962    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:39:54.970036    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:39:54.970045    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:39:54.970202    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:39:54.970419    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:39:54.970460    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:39:54.970465    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:39:54.970554    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:39:54.970711    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:39:54.970750    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:39:54.970755    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:39:54.970839    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:39:54.971014    9512 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-353000]
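The server certificate is issued with subject alternative names covering every address the daemon may be reached by: loopback, the VM IP, and the host names listed in the san=[...] field above. A sketch of issuing such a CA-signed cert with Go's standard library, assuming an RSA CA key (the key size and validity window are illustrative, not what minikube necessarily uses):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with an existing CA so that TLS
// clients can verify the daemon at any of the listed addresses.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-353000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list from the log line above: loopback, the VM IP, and names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.19")},
		DNSNames:    []string{"localhost", "minikube", "multinode-353000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}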
	I0610 19:39:55.012361    9512 provision.go:177] copyRemoteCerts
	I0610 19:39:55.012408    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:39:55.012422    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:55.012543    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:55.012629    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.012715    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:55.012802    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:39:55.049377    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:39:55.049456    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:39:55.068268    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:39:55.068334    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 19:39:55.087069    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:39:55.087138    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 19:39:55.107088    9512 provision.go:87] duration metric: took 137.505638ms to configureAuth
	I0610 19:39:55.107103    9512 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:39:55.107257    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:39:55.107270    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:55.107406    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:55.107508    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:55.107593    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.107680    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.107764    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:55.107887    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:55.108018    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:55.108030    9512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:39:55.167516    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:39:55.167528    9512 buildroot.go:70] root file system type: tmpfs
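The df probe asks what filesystem backs / ; on this buildroot ISO the answer is tmpfs, i.e. an in-memory root, which the provisioner records before deciding how to write out the Docker unit (the exact use of the answer is minikube-internal). The probe itself is a one-liner; as a Go sketch:

package provision

import (
	"os/exec"
	"strings"
)

// rootFSType runs the same probe as the log above and trims the answer.
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	return strings.TrimSpace(string(out)), err
}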
	I0610 19:39:55.167607    9512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:39:55.167621    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:55.167758    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:55.167863    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.167956    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.168044    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:55.168187    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:55.168326    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:55.168372    9512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:39:55.237672    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:39:55.237695    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:55.237841    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:55.237938    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.238030    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:55.238107    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:55.238238    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:55.238399    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:55.238411    9512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:39:56.746851    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
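The compare-and-swap one-liner relies on diff exiting non-zero both when the two units differ and when the old unit is absent, so a fresh VM (as here, where diff reports "No such file or directory") always takes the install-and-restart branch. The same logic in Go, as a sketch with abbreviated error handling (not minikube's actual code):

package provision

import "os/exec"

// maybeSwapUnit mirrors the shell one-liner above: diff exits non-zero when
// the rendered unit differs from the installed one OR when no unit is
// installed yet, and either case triggers the install-and-restart branch.
func maybeSwapUnit() error {
	if exec.Command("sudo", "diff", "-u",
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new").Run() == nil {
		return nil // units are identical; nothing to do
	}
	steps := [][]string{
		{"sudo", "mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}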
	
	I0610 19:39:56.746881    9512 main.go:141] libmachine: Checking connection to Docker...
	I0610 19:39:56.746893    9512 main.go:141] libmachine: (multinode-353000) Calling .GetURL
	I0610 19:39:56.747052    9512 main.go:141] libmachine: Docker is up and running!
	I0610 19:39:56.747061    9512 main.go:141] libmachine: Reticulating splines...
	I0610 19:39:56.747065    9512 client.go:171] duration metric: took 15.794188116s to LocalClient.Create
	I0610 19:39:56.747078    9512 start.go:167] duration metric: took 15.794235886s to libmachine.API.Create "multinode-353000"
	I0610 19:39:56.747087    9512 start.go:293] postStartSetup for "multinode-353000" (driver="hyperkit")
	I0610 19:39:56.747095    9512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:39:56.747108    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:56.747268    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:39:56.747280    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:56.747367    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:56.747468    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:56.747575    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:56.747677    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:39:56.790677    9512 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:39:56.796089    9512 command_runner.go:130] > NAME=Buildroot
	I0610 19:39:56.796104    9512 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:39:56.796109    9512 command_runner.go:130] > ID=buildroot
	I0610 19:39:56.796115    9512 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:39:56.796121    9512 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:39:56.796153    9512 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:39:56.796163    9512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:39:56.796274    9512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:39:56.796473    9512 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:39:56.796480    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:39:56.796700    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:39:56.805687    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:39:56.831961    9512 start.go:296] duration metric: took 84.867322ms for postStartSetup
	I0610 19:39:56.832025    9512 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:39:56.832809    9512 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:39:56.833042    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:39:56.833416    9512 start.go:128] duration metric: took 15.934390246s to createHost
	I0610 19:39:56.833431    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:56.833606    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:56.833819    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:56.833958    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:56.834164    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:56.834366    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:39:56.834530    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:39:56.834537    9512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:39:56.893805    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073596.745557271
	
	I0610 19:39:56.893818    9512 fix.go:216] guest clock: 1718073596.745557271
	I0610 19:39:56.893823    9512 fix.go:229] Guest: 2024-06-10 19:39:56.745557271 -0700 PDT Remote: 2024-06-10 19:39:56.833424 -0700 PDT m=+16.363695872 (delta=-87.866729ms)
	I0610 19:39:56.893843    9512 fix.go:200] guest clock delta is within tolerance: -87.866729ms
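fix.go reads the guest clock over SSH (date +%s.%N above), compares it with the host clock, and only intervenes when the drift exceeds a tolerance; here the -87ms delta passes. A minimal sketch of that check (the tolerance parameter is the caller's choice; the value minikube uses is not shown in this log):

package clockfix

import (
	"fmt"
	"time"
)

// checkDrift reports an error when the guest clock has drifted from the host
// clock by more than the allowed tolerance in either direction.
func checkDrift(guest, host time.Time, tolerance time.Duration) error {
	delta := guest.Sub(host)
	if delta < -tolerance || delta > tolerance {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	return nil
}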
	I0610 19:39:56.893847    9512 start.go:83] releasing machines lock for "multinode-353000", held for 15.995046008s
	I0610 19:39:56.893866    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:56.893997    9512 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:39:56.894075    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:56.894354    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:56.894455    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:39:56.894536    9512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:39:56.894562    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:56.894610    9512 ssh_runner.go:195] Run: cat /version.json
	I0610 19:39:56.894624    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:39:56.894657    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:56.894742    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:56.894758    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:39:56.894854    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:56.894873    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:39:56.894946    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:39:56.894972    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:39:56.895047    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:39:56.925095    9512 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 19:39:56.925291    9512 ssh_runner.go:195] Run: systemctl --version
	I0610 19:39:56.929635    9512 command_runner.go:130] > systemd 252 (252)
	I0610 19:39:56.929659    9512 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 19:39:56.929888    9512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:39:56.978993    9512 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:39:56.979987    9512 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:39:56.980032    9512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:39:56.980131    9512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:39:56.994661    9512 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:39:56.994856    9512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:39:56.994871    9512 start.go:494] detecting cgroup driver to use...
	I0610 19:39:56.994984    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:39:57.009504    9512 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:39:57.009755    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:39:57.018788    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:39:57.027852    9512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:39:57.027900    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:39:57.037161    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:39:57.046048    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:39:57.054961    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:39:57.064322    9512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:39:57.073550    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:39:57.082455    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:39:57.091268    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:39:57.100198    9512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:39:57.108065    9512 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:39:57.108203    9512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:39:57.116147    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:39:57.218596    9512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 19:39:57.235791    9512 start.go:494] detecting cgroup driver to use...
	I0610 19:39:57.235868    9512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:39:57.247573    9512 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:39:57.248550    9512 command_runner.go:130] > [Unit]
	I0610 19:39:57.248557    9512 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:39:57.248564    9512 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:39:57.248569    9512 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:39:57.248574    9512 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:39:57.248579    9512 command_runner.go:130] > StartLimitBurst=3
	I0610 19:39:57.248583    9512 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:39:57.248586    9512 command_runner.go:130] > [Service]
	I0610 19:39:57.248590    9512 command_runner.go:130] > Type=notify
	I0610 19:39:57.248606    9512 command_runner.go:130] > Restart=on-failure
	I0610 19:39:57.248615    9512 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:39:57.248632    9512 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:39:57.248638    9512 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:39:57.248643    9512 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:39:57.248653    9512 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:39:57.248660    9512 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:39:57.248666    9512 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:39:57.248674    9512 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:39:57.248680    9512 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:39:57.248684    9512 command_runner.go:130] > ExecStart=
	I0610 19:39:57.248698    9512 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:39:57.248702    9512 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:39:57.248709    9512 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:39:57.248715    9512 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:39:57.248718    9512 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:39:57.248722    9512 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:39:57.248725    9512 command_runner.go:130] > LimitCORE=infinity
	I0610 19:39:57.248731    9512 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:39:57.248737    9512 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:39:57.248741    9512 command_runner.go:130] > TasksMax=infinity
	I0610 19:39:57.248747    9512 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:39:57.248754    9512 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:39:57.248757    9512 command_runner.go:130] > Delegate=yes
	I0610 19:39:57.248762    9512 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:39:57.248766    9512 command_runner.go:130] > KillMode=process
	I0610 19:39:57.248769    9512 command_runner.go:130] > [Install]
	I0610 19:39:57.248778    9512 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:39:57.248887    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:39:57.262587    9512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:39:57.276697    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:39:57.287817    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:39:57.298842    9512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:39:57.336484    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:39:57.347064    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:39:57.362036    9512 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:39:57.362429    9512 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:39:57.365202    9512 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:39:57.365424    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:39:57.372687    9512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:39:57.386110    9512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:39:57.483038    9512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:39:57.592071    9512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:39:57.592144    9512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 19:39:57.606706    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:39:57.716293    9512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:39:59.985238    9512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.269000999s)
	I0610 19:39:59.985299    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 19:39:59.996868    9512 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 19:40:00.011446    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:40:00.023714    9512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 19:40:00.132008    9512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 19:40:00.233751    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:40:00.348224    9512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 19:40:00.367573    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:40:00.382500    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:40:00.477482    9512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 19:40:00.535138    9512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 19:40:00.535213    9512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 19:40:00.539777    9512 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 19:40:00.539791    9512 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 19:40:00.539807    9512 command_runner.go:130] > Device: 0,22	Inode: 807         Links: 1
	I0610 19:40:00.539814    9512 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 19:40:00.539818    9512 command_runner.go:130] > Access: 2024-06-11 02:40:00.342465553 +0000
	I0610 19:40:00.539826    9512 command_runner.go:130] > Modify: 2024-06-11 02:40:00.342465553 +0000
	I0610 19:40:00.539831    9512 command_runner.go:130] > Change: 2024-06-11 02:40:00.345465553 +0000
	I0610 19:40:00.539834    9512 command_runner.go:130] >  Birth: -
	I0610 19:40:00.539845    9512 start.go:562] Will wait 60s for crictl version
	I0610 19:40:00.539895    9512 ssh_runner.go:195] Run: which crictl
	I0610 19:40:00.542817    9512 command_runner.go:130] > /usr/bin/crictl
	I0610 19:40:00.542989    9512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 19:40:00.574656    9512 command_runner.go:130] > Version:  0.1.0
	I0610 19:40:00.574669    9512 command_runner.go:130] > RuntimeName:  docker
	I0610 19:40:00.574673    9512 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 19:40:00.574677    9512 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 19:40:00.575605    9512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 19:40:00.575677    9512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:40:00.591945    9512 command_runner.go:130] > 26.1.4
	I0610 19:40:00.592781    9512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:40:00.607335    9512 command_runner.go:130] > 26.1.4
	I0610 19:40:00.651412    9512 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 19:40:00.651507    9512 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:40:00.651909    9512 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0610 19:40:00.656501    9512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:40:00.666959    9512 kubeadm.go:877] updating cluster {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 19:40:00.667028    9512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:40:00.667091    9512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:40:00.677368    9512 docker.go:685] Got preloaded images: 
	I0610 19:40:00.677381    9512 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0610 19:40:00.677440    9512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 19:40:00.685479    9512 command_runner.go:139] > {"Repositories":{}}
	I0610 19:40:00.685713    9512 ssh_runner.go:195] Run: which lz4
	I0610 19:40:00.688284    9512 command_runner.go:130] > /usr/bin/lz4
	I0610 19:40:00.688406    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 19:40:00.688519    9512 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 19:40:00.691366    9512 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 19:40:00.691478    9512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 19:40:00.691499    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359564351 bytes)
	I0610 19:40:01.791811    9512 docker.go:649] duration metric: took 1.103370384s to copy over tarball
	I0610 19:40:01.791873    9512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 19:40:04.569274    9512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.777481604s)
	I0610 19:40:04.569289    9512 ssh_runner.go:146] rm: /preloaded.tar.lz4
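Rather than pulling each Kubernetes image over the network, minikube copies a preloaded lz4-compressed tarball into the VM and unpacks it over /var; the --xattrs flags preserve security.capability so file capabilities survive the copy. The extraction step in Go, mirroring the exact tar invocation in the log (a sketch; minikube runs it through its ssh_runner rather than locally):

package preload

import "os/exec"

// extractPreload unpacks the preloaded image tarball into /var, keeping
// security xattrs intact and decompressing with lz4.
func extractPreload() error {
	return exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run()
}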
	I0610 19:40:04.595851    9512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 19:40:04.604503    9512 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.1":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea":"sha256:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.1":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52":"sha256:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.1":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c":"sha256:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.1":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036":"sha256:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0610 19:40:04.604610    9512 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0610 19:40:04.618209    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:40:04.717328    9512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:40:07.014977    9512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.297711363s)
	I0610 19:40:07.015073    9512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:40:07.027546    9512 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:40:07.027559    9512 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:40:07.027563    9512 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:40:07.027580    9512 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:40:07.027585    9512 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:40:07.027590    9512 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:40:07.027594    9512 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:40:07.027599    9512 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:40:07.028108    9512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 19:40:07.028126    9512 cache_images.go:84] Images are preloaded, skipping loading
	I0610 19:40:07.028140    9512 kubeadm.go:928] updating node { 192.169.0.19 8443 v1.30.1 docker true true} ...
	I0610 19:40:07.028220    9512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 19:40:07.028292    9512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 19:40:07.047032    9512 command_runner.go:130] > cgroupfs
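The cgroup driver is detected by asking dockerd directly, and the answer (cgroupfs here) must match the kubelet's cgroupDriver setting in the config generated further down. A sketch of the probe, using standard Docker CLI template syntax:

package detect

import (
	"os/exec"
	"strings"
)

// dockerCgroupDriver returns the cgroup driver dockerd reports, e.g.
// "cgroupfs" or "systemd".
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	return strings.TrimSpace(string(out)), err
}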
	I0610 19:40:07.047963    9512 cni.go:84] Creating CNI manager for ""
	I0610 19:40:07.047974    9512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 19:40:07.047983    9512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 19:40:07.048000    9512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353000 NodeName:multinode-353000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 19:40:07.048090    9512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-353000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 19:40:07.048151    9512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 19:40:07.056967    9512 command_runner.go:130] > kubeadm
	I0610 19:40:07.056979    9512 command_runner.go:130] > kubectl
	I0610 19:40:07.056982    9512 command_runner.go:130] > kubelet
	I0610 19:40:07.057026    9512 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 19:40:07.057075    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 19:40:07.065306    9512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0610 19:40:07.078988    9512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 19:40:07.092263    9512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0610 19:40:07.105821    9512 ssh_runner.go:195] Run: grep 192.169.0.19	control-plane.minikube.internal$ /etc/hosts
	I0610 19:40:07.108638    9512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:40:07.118712    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:40:07.218256    9512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:40:07.234153    9512 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000 for IP: 192.169.0.19
	I0610 19:40:07.234168    9512 certs.go:194] generating shared ca certs ...
	I0610 19:40:07.234193    9512 certs.go:226] acquiring lock for ca certs: {Name:mkb8782270d93d160af8329e99f7f211e7b6b737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.234427    9512 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key
	I0610 19:40:07.234505    9512 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key
	I0610 19:40:07.234516    9512 certs.go:256] generating profile certs ...
	I0610 19:40:07.234603    9512 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key
	I0610 19:40:07.234618    9512 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt with IP's: []
	I0610 19:40:07.469653    9512 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt ...
	I0610 19:40:07.469671    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt: {Name:mkbab8a297d1b99c2a3b4945d291d5a135214994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.471111    9512 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key ...
	I0610 19:40:07.471129    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key: {Name:mk1b96b5815f11f71bf9eef0822178e6ea46e5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.472117    9512 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6
	I0610 19:40:07.472138    9512 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt.6aa173b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.19]
	I0610 19:40:07.603750    9512 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt.6aa173b6 ...
	I0610 19:40:07.603765    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt.6aa173b6: {Name:mk207f17afa65dbefc6833b46da03bae458474e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.607074    9512 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6 ...
	I0610 19:40:07.607093    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6: {Name:mk4f5faa898e6cdb7a0fd516aea000ce5de245f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.607448    9512 certs.go:381] copying /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt.6aa173b6 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt
	I0610 19:40:07.607670    9512 certs.go:385] copying /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key
	I0610 19:40:07.607892    9512 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key
	I0610 19:40:07.607916    9512 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt with IP's: []
	I0610 19:40:07.780625    9512 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt ...
	I0610 19:40:07.780640    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt: {Name:mk083373cd3093eacf9eea2c9ab2b073252aff23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.780960    9512 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key ...
	I0610 19:40:07.780970    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key: {Name:mk7d8a81e71f3a720e153ec38318cae5be4034a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:07.781181    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 19:40:07.781209    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 19:40:07.781229    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 19:40:07.781255    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 19:40:07.781280    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 19:40:07.781306    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 19:40:07.781330    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 19:40:07.781349    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 19:40:07.781458    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem (1338 bytes)
	W0610 19:40:07.781510    9512 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485_empty.pem, impossibly tiny 0 bytes
	I0610 19:40:07.781518    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 19:40:07.781550    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem (1082 bytes)
	I0610 19:40:07.781580    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem (1123 bytes)
	I0610 19:40:07.781609    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem (1679 bytes)
	I0610 19:40:07.781676    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:40:07.781716    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /usr/share/ca-certificates/64852.pem
	I0610 19:40:07.781737    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:40:07.781757    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem -> /usr/share/ca-certificates/6485.pem
	I0610 19:40:07.782153    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 19:40:07.805564    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 19:40:07.825772    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 19:40:07.844778    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 19:40:07.863813    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 19:40:07.883313    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 19:40:07.903604    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 19:40:07.936422    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 19:40:07.959279    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /usr/share/ca-certificates/64852.pem (1708 bytes)
	I0610 19:40:07.979863    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 19:40:07.999102    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem --> /usr/share/ca-certificates/6485.pem (1338 bytes)
	I0610 19:40:08.018166    9512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
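The scp lines above push the host-side certificates into the VM over SSH; the final "scp memory" writes an in-memory kubeconfig rather than a file on disk. Below is a generic sketch of that in-memory copy using golang.org/x/crypto/ssh — minikube actually routes this through its own ssh_runner, and the address and credentials here are placeholders:

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // copyToVM streams an in-memory byte slice to a path inside the guest,
    // roughly what the "scp memory --> /var/lib/minikube/kubeconfig" step does.
    func copyToVM(client *ssh.Client, data []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee runs under sudo so the file can land in root-owned directories.
        return sess.Run("sudo tee " + dest + " >/dev/null")
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("tcuser")}, // placeholder credentials
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.169.0.19:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := copyToVM(client, []byte("apiVersion: v1\n"), "/var/lib/minikube/kubeconfig"); err != nil {
            log.Fatal(err)
        }
    }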
	I0610 19:40:08.031616    9512 ssh_runner.go:195] Run: openssl version
	I0610 19:40:08.035594    9512 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 19:40:08.035886    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64852.pem && ln -fs /usr/share/ca-certificates/64852.pem /etc/ssl/certs/64852.pem"
	I0610 19:40:08.044923    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64852.pem
	I0610 19:40:08.048083    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:40:08.048276    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:40:08.048310    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64852.pem
	I0610 19:40:08.052203    9512 command_runner.go:130] > 3ec20f2e
	I0610 19:40:08.052699    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 19:40:08.062210    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 19:40:08.071238    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:40:08.074416    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:40:08.074513    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:40:08.074548    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:40:08.078557    9512 command_runner.go:130] > b5213941
	I0610 19:40:08.078711    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 19:40:08.087728    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6485.pem && ln -fs /usr/share/ca-certificates/6485.pem /etc/ssl/certs/6485.pem"
	I0610 19:40:08.096758    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6485.pem
	I0610 19:40:08.099859    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:40:08.100080    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:40:08.100135    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6485.pem
	I0610 19:40:08.104134    9512 command_runner.go:130] > 51391683
	I0610 19:40:08.104349    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6485.pem /etc/ssl/certs/51391683.0"
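The three repeated blocks above implement OpenSSL's hashed-directory convention: each CA certificate is placed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs as <subject-hash>.0, where the hash is the output of openssl x509 -hash -noout. A minimal Go sketch of the same convention, shelling out to openssl just as the log does (the path in main is illustrative):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject-name hash, mirroring the "openssl x509 -hash" + "ln -fs" steps.
    func installCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }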
	I0610 19:40:08.113339    9512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:40:08.116160    9512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 19:40:08.116318    9512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
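The failed stat above doubles as a first-start probe: certs.go treats a missing apiserver-kubelet-client.crt as a sign that kubeadm has never run on this VM. A minimal local sketch of that check (path from the log; the "restart" branch is assumed for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // A missing apiserver-kubelet-client.crt is read as "likely first
        // start": kubeadm has never generated its client certs here.
        const probe = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
        if _, err := os.Stat(probe); os.IsNotExist(err) {
            fmt.Println("first start: no existing cluster certificates")
        } else {
            fmt.Println("certificates present: treat as restart")
        }
    }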
	I0610 19:40:08.116364    9512 kubeadm.go:391] StartCluster: {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:40:08.116460    9512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:40:08.127670    9512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 19:40:08.135573    9512 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 19:40:08.135585    9512 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 19:40:08.135590    9512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 19:40:08.135770    9512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 19:40:08.144243    9512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 19:40:08.152347    9512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 19:40:08.152359    9512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 19:40:08.152366    9512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 19:40:08.152372    9512 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:40:08.152467    9512 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:40:08.152477    9512 kubeadm.go:156] found existing configuration files:
	
	I0610 19:40:08.152514    9512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 19:40:08.160148    9512 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:40:08.160168    9512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:40:08.160203    9512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 19:40:08.168006    9512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 19:40:08.175587    9512 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:40:08.175609    9512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:40:08.175651    9512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 19:40:08.183600    9512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 19:40:08.191185    9512 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:40:08.191201    9512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:40:08.191241    9512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 19:40:08.199061    9512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 19:40:08.206647    9512 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:40:08.206663    9512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:40:08.206701    9512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
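Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the check fails; on a first start the files do not exist, so every grep exits with status 2 and the rm -f is a no-op. A local sketch of the equivalent check (endpoint and paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // removeIfStale drops a kubeconfig-style file that does not reference the
    // expected control-plane endpoint, mirroring the grep-then-rm loop above.
    func removeIfStale(path string) {
        data, err := os.ReadFile(path)
        if err != nil || !strings.Contains(string(data), endpoint) {
            os.Remove(path) // ignore errors: rm -f semantics
            fmt.Println("removed (stale or missing):", path)
        }
    }

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            removeIfStale(f)
        }
    }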
	I0610 19:40:08.214519    9512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 19:40:08.286727    9512 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 19:40:08.286753    9512 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0610 19:40:08.286811    9512 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 19:40:08.286816    9512 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 19:40:08.372589    9512 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 19:40:08.372603    9512 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 19:40:08.372679    9512 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 19:40:08.372687    9512 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 19:40:08.372778    9512 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 19:40:08.372785    9512 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 19:40:08.537397    9512 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 19:40:08.537415    9512 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 19:40:08.561619    9512 out.go:204]   - Generating certificates and keys ...
	I0610 19:40:08.561686    9512 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 19:40:08.561698    9512 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 19:40:08.561767    9512 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 19:40:08.561775    9512 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 19:40:08.745894    9512 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 19:40:08.745938    9512 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 19:40:08.906266    9512 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 19:40:08.906282    9512 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 19:40:09.028828    9512 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 19:40:09.028855    9512 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 19:40:09.254039    9512 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 19:40:09.254046    9512 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 19:40:09.376974    9512 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 19:40:09.376992    9512 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 19:40:09.377145    9512 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-353000] and IPs [192.169.0.19 127.0.0.1 ::1]
	I0610 19:40:09.377151    9512 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-353000] and IPs [192.169.0.19 127.0.0.1 ::1]
	I0610 19:40:09.543634    9512 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 19:40:09.543636    9512 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 19:40:09.543822    9512 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-353000] and IPs [192.169.0.19 127.0.0.1 ::1]
	I0610 19:40:09.543819    9512 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-353000] and IPs [192.169.0.19 127.0.0.1 ::1]
	I0610 19:40:09.621036    9512 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 19:40:09.621048    9512 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 19:40:09.785964    9512 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 19:40:09.785980    9512 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 19:40:09.901757    9512 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 19:40:09.901771    9512 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 19:40:09.902139    9512 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 19:40:09.902151    9512 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 19:40:10.254282    9512 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 19:40:10.254319    9512 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 19:40:10.313595    9512 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 19:40:10.313604    9512 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 19:40:10.495901    9512 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 19:40:10.495913    9512 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 19:40:10.953308    9512 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 19:40:10.953338    9512 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 19:40:11.247056    9512 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 19:40:11.247072    9512 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 19:40:11.247564    9512 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 19:40:11.247593    9512 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 19:40:11.249464    9512 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 19:40:11.270876    9512 out.go:204]   - Booting up control plane ...
	I0610 19:40:11.249497    9512 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 19:40:11.270951    9512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 19:40:11.270957    9512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 19:40:11.271008    9512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 19:40:11.271012    9512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 19:40:11.271057    9512 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 19:40:11.271061    9512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 19:40:11.271151    9512 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:40:11.271153    9512 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:40:11.271221    9512 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:40:11.271227    9512 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:40:11.271256    9512 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 19:40:11.271260    9512 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 19:40:11.376357    9512 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 19:40:11.376368    9512 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 19:40:11.376432    9512 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 19:40:11.376439    9512 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 19:40:11.887487    9512 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 511.448824ms
	I0610 19:40:11.887508    9512 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 511.448824ms
	I0610 19:40:11.887638    9512 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 19:40:11.887661    9512 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 19:40:15.887011    9512 kubeadm.go:309] [api-check] The API server is healthy after 4.002596111s
	I0610 19:40:15.887022    9512 command_runner.go:130] > [api-check] The API server is healthy after 4.002596111s
	I0610 19:40:15.897302    9512 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 19:40:15.897310    9512 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 19:40:15.903974    9512 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 19:40:15.903978    9512 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 19:40:15.919026    9512 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 19:40:15.919042    9512 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 19:40:15.919202    9512 kubeadm.go:309] [mark-control-plane] Marking the node multinode-353000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 19:40:15.919210    9512 command_runner.go:130] > [mark-control-plane] Marking the node multinode-353000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 19:40:15.938969    9512 kubeadm.go:309] [bootstrap-token] Using token: g4wzzt.2z3r7t7mbrw4tgck
	I0610 19:40:15.939001    9512 command_runner.go:130] > [bootstrap-token] Using token: g4wzzt.2z3r7t7mbrw4tgck
	I0610 19:40:15.962194    9512 out.go:204]   - Configuring RBAC rules ...
	I0610 19:40:15.962316    9512 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 19:40:15.962328    9512 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 19:40:15.988952    9512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 19:40:15.988970    9512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 19:40:15.994610    9512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 19:40:15.994620    9512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 19:40:15.996870    9512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 19:40:15.996876    9512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 19:40:15.999441    9512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 19:40:15.999453    9512 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 19:40:16.002408    9512 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 19:40:16.002421    9512 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 19:40:16.295696    9512 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 19:40:16.295704    9512 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 19:40:16.717592    9512 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 19:40:16.717604    9512 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 19:40:17.295313    9512 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 19:40:17.295328    9512 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 19:40:17.296019    9512 kubeadm.go:309] 
	I0610 19:40:17.296064    9512 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 19:40:17.296071    9512 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 19:40:17.296074    9512 kubeadm.go:309] 
	I0610 19:40:17.296153    9512 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 19:40:17.296161    9512 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 19:40:17.296170    9512 kubeadm.go:309] 
	I0610 19:40:17.296203    9512 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 19:40:17.296207    9512 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 19:40:17.296261    9512 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 19:40:17.296266    9512 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 19:40:17.296306    9512 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 19:40:17.296313    9512 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 19:40:17.296316    9512 kubeadm.go:309] 
	I0610 19:40:17.296361    9512 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 19:40:17.296364    9512 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 19:40:17.296371    9512 kubeadm.go:309] 
	I0610 19:40:17.296410    9512 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 19:40:17.296411    9512 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 19:40:17.296421    9512 kubeadm.go:309] 
	I0610 19:40:17.296455    9512 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 19:40:17.296459    9512 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 19:40:17.296518    9512 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 19:40:17.296523    9512 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 19:40:17.296580    9512 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 19:40:17.296584    9512 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 19:40:17.296588    9512 kubeadm.go:309] 
	I0610 19:40:17.296646    9512 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 19:40:17.296649    9512 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 19:40:17.296710    9512 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 19:40:17.296715    9512 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 19:40:17.296718    9512 kubeadm.go:309] 
	I0610 19:40:17.296782    9512 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token g4wzzt.2z3r7t7mbrw4tgck \
	I0610 19:40:17.296786    9512 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token g4wzzt.2z3r7t7mbrw4tgck \
	I0610 19:40:17.296876    9512 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db \
	I0610 19:40:17.296878    9512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db \
	I0610 19:40:17.296900    9512 command_runner.go:130] > 	--control-plane 
	I0610 19:40:17.296905    9512 kubeadm.go:309] 	--control-plane 
	I0610 19:40:17.296908    9512 kubeadm.go:309] 
	I0610 19:40:17.296977    9512 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 19:40:17.296983    9512 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 19:40:17.296988    9512 kubeadm.go:309] 
	I0610 19:40:17.297079    9512 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g4wzzt.2z3r7t7mbrw4tgck \
	I0610 19:40:17.297085    9512 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token g4wzzt.2z3r7t7mbrw4tgck \
	I0610 19:40:17.297167    9512 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db 
	I0610 19:40:17.297171    9512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db 
	I0610 19:40:17.297264    9512 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 19:40:17.297267    9512 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
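Both join commands pin the cluster CA with --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes the printed value from ca.crt (in-guest path taken from the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm pins the CA by hashing the DER-encoded SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }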
	I0610 19:40:17.297282    9512 cni.go:84] Creating CNI manager for ""
	I0610 19:40:17.297287    9512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 19:40:17.320096    9512 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 19:40:17.361581    9512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 19:40:17.365807    9512 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 19:40:17.365822    9512 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 19:40:17.365827    9512 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 19:40:17.365832    9512 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:40:17.365836    9512 command_runner.go:130] > Access: 2024-06-11 02:39:51.255466183 +0000
	I0610 19:40:17.365842    9512 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 19:40:17.365848    9512 command_runner.go:130] > Change: 2024-06-11 02:39:49.698466291 +0000
	I0610 19:40:17.365851    9512 command_runner.go:130] >  Birth: -
	I0610 19:40:17.365895    9512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 19:40:17.365904    9512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 19:40:17.381218    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 19:40:17.554306    9512 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 19:40:17.557247    9512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 19:40:17.561908    9512 command_runner.go:130] > serviceaccount/kindnet created
	I0610 19:40:17.567337    9512 command_runner.go:130] > daemonset.apps/kindnet created
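With kindnet selected, the CNI manifest is applied using the cluster's own kubectl binary and kubeconfig. A sketch of that invocation from Go (all paths taken from the log; minikube actually runs this through its ssh_runner inside the VM):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Apply the CNI manifest with the version-pinned kubectl, as the log
        // does for kindnet.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v: %s", err, out)
        }
    }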
	I0610 19:40:17.569292    9512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 19:40:17.569355    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-353000 minikube.k8s.io/updated_at=2024_06_10T19_40_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-353000 minikube.k8s.io/primary=true
	I0610 19:40:17.569358    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:17.693624    9512 command_runner.go:130] > node/multinode-353000 labeled
	I0610 19:40:17.694755    9512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 19:40:17.694832    9512 command_runner.go:130] > -16
	I0610 19:40:17.694845    9512 ops.go:34] apiserver oom_adj: -16
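ops.go records the API server's OOM adjustment by reading /proc/<pid>/oom_adj for the kube-apiserver process found via pgrep; -16 tells the kernel's OOM killer to strongly prefer other victims. A sketch of the same probe, assuming it runs inside the guest:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver pid, as "pgrep kube-apiserver" does above.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }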
	I0610 19:40:17.694925    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:17.754734    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:18.195695    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:18.255040    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:18.695258    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:18.757618    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:19.195713    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:19.253425    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:19.696744    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:19.755079    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:20.196126    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:20.256471    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:20.696685    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:20.754736    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:21.195142    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:21.254165    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:21.696011    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:21.758223    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:22.194866    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:22.251411    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:22.694895    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:22.753895    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:23.195493    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:23.254074    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:23.695698    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:23.754970    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:24.196269    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:24.254159    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:24.695251    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:24.752896    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:25.196003    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:25.259805    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:25.694953    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:25.755632    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:26.194965    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:26.253418    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:26.695043    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:26.755997    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:27.195003    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:27.253990    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:27.694837    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:27.759822    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:28.195068    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:28.255042    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:28.694665    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:28.756229    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:29.194994    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:29.257152    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:29.694532    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:29.765004    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:30.195550    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:30.257023    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:30.695117    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:30.757428    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:31.194869    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:31.257563    9512 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 19:40:31.695738    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 19:40:31.757280    9512 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 19:40:31.757358    9512 command_runner.go:130] > default   0         1s
	I0610 19:40:31.758401    9512 kubeadm.go:1107] duration metric: took 14.189589941s to wait for elevateKubeSystemPrivileges
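The preceding block is a fixed-interval poll: the same "kubectl get sa default" is retried roughly every 500 ms until the default service account exists, which signals that kube-controller-manager's service-account controller is serving; here that took about 14 s. A generic Go sketch of the pattern (timeout value assumed):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls until "kubectl get sa default" succeeds or the
    // deadline passes, mirroring the retry loop in the log above.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("default service account is ready")
    }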
	W0610 19:40:31.758423    9512 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 19:40:31.758429    9512 kubeadm.go:393] duration metric: took 23.642891001s to StartCluster
	I0610 19:40:31.758443    9512 settings.go:142] acquiring lock: {Name:mkfdfd0a396b1866366b70895e6d936c4f7de68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:31.758540    9512 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:40:31.759015    9512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:40:31.759277    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 19:40:31.759290    9512 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 19:40:31.781967    9512 out.go:177] * Verifying Kubernetes components...
	I0610 19:40:31.759307    9512 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 19:40:31.759410    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:40:31.782010    9512 addons.go:69] Setting default-storageclass=true in profile "multinode-353000"
	I0610 19:40:31.822767    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:40:31.782008    9512 addons.go:69] Setting storage-provisioner=true in profile "multinode-353000"
	I0610 19:40:31.822818    9512 addons.go:234] Setting addon storage-provisioner=true in "multinode-353000"
	I0610 19:40:31.822844    9512 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:40:31.822841    9512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-353000"
	I0610 19:40:31.823117    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:31.823133    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:31.823213    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:31.823232    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:31.832854    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53066
	I0610 19:40:31.832869    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53065
	I0610 19:40:31.833005    9512 command_runner.go:130] > apiVersion: v1
	I0610 19:40:31.833017    9512 command_runner.go:130] > data:
	I0610 19:40:31.833021    9512 command_runner.go:130] >   Corefile: |
	I0610 19:40:31.833026    9512 command_runner.go:130] >     .:53 {
	I0610 19:40:31.833032    9512 command_runner.go:130] >         errors
	I0610 19:40:31.833040    9512 command_runner.go:130] >         health {
	I0610 19:40:31.833051    9512 command_runner.go:130] >            lameduck 5s
	I0610 19:40:31.833059    9512 command_runner.go:130] >         }
	I0610 19:40:31.833065    9512 command_runner.go:130] >         ready
	I0610 19:40:31.833075    9512 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 19:40:31.833082    9512 command_runner.go:130] >            pods insecure
	I0610 19:40:31.833092    9512 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 19:40:31.833101    9512 command_runner.go:130] >            ttl 30
	I0610 19:40:31.833107    9512 command_runner.go:130] >         }
	I0610 19:40:31.833191    9512 command_runner.go:130] >         prometheus :9153
	I0610 19:40:31.833205    9512 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 19:40:31.833214    9512 command_runner.go:130] >            max_concurrent 1000
	I0610 19:40:31.833219    9512 command_runner.go:130] >         }
	I0610 19:40:31.833225    9512 command_runner.go:130] >         cache 30
	I0610 19:40:31.833231    9512 command_runner.go:130] >         loop
	I0610 19:40:31.833236    9512 command_runner.go:130] >         reload
	I0610 19:40:31.833242    9512 command_runner.go:130] >         loadbalance
	I0610 19:40:31.833247    9512 command_runner.go:130] >     }
	I0610 19:40:31.833254    9512 command_runner.go:130] > kind: ConfigMap
	I0610 19:40:31.833276    9512 command_runner.go:130] > metadata:
	I0610 19:40:31.833256    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:31.833290    9512 command_runner.go:130] >   creationTimestamp: "2024-06-11T02:40:16Z"
	I0610 19:40:31.833295    9512 command_runner.go:130] >   name: coredns
	I0610 19:40:31.833299    9512 command_runner.go:130] >   namespace: kube-system
	I0610 19:40:31.833303    9512 command_runner.go:130] >   resourceVersion: "221"
	I0610 19:40:31.833307    9512 command_runner.go:130] >   uid: 5ba2f9e9-4920-47c0-93a5-29872a71d88e
	I0610 19:40:31.833375    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:31.833638    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:31.833648    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:31.833727    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:31.833738    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:31.833856    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:31.833977    9512 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:40:31.833987    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:31.834069    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:31.834162    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:40:31.834234    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 19:40:31.834355    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:31.834390    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:31.836160    9512 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:40:31.836975    9512 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xda10600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:40:31.837420    9512 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 19:40:31.837551    9512 addons.go:234] Setting addon default-storageclass=true in "multinode-353000"
	I0610 19:40:31.837586    9512 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:40:31.837810    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:31.837834    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:31.843960    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53069
	I0610 19:40:31.844331    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:31.844845    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:31.844859    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:31.845186    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:31.845345    9512 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:40:31.845565    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:31.845679    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:40:31.846878    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:40:31.867927    9512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:40:31.847552    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53071
	I0610 19:40:31.889127    9512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 19:40:31.889139    9512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 19:40:31.889151    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:40:31.889307    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:40:31.889388    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:31.889424    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:40:31.889539    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:40:31.889645    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:40:31.889783    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:31.889792    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:31.890004    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:31.890415    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:31.890436    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:31.899565    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53074
	I0610 19:40:31.900014    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:31.900390    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:31.900402    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:31.900639    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:31.900797    9512 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:40:31.900891    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:31.900987    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:40:31.902110    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:40:31.902317    9512 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 19:40:31.902326    9512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 19:40:31.902335    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:40:31.902422    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:40:31.902522    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:40:31.902615    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:40:31.902695    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:40:32.015004    9512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:40:32.110486    9512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 19:40:32.148437    9512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
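Each copied manifest is then applied by running kubectl inside the VM with the cluster's kubeconfig, exactly as the two ssh_runner Run lines show. The same command expressed with os/exec, shown as a local sketch for clarity (minikube issues it over the SSH session established above):

    // Sketch: the kubectl invocation from the ssh_runner lines above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value assignments, as in the logged command.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.1/kubectl",
            "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
        log.Printf("%s", out) // e.g. "pod/storage-provisioner created"
    }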
	I0610 19:40:32.331348    9512 command_runner.go:130] > configmap/coredns replaced
	I0610 19:40:32.331410    9512 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
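The "host record injected" line is minikube making host.minikube.internal resolve to the host-side gateway (192.169.0.1) by rewriting the coredns ConfigMap, hence the preceding "configmap/coredns replaced". A hedged client-go sketch of that edit follows; the exact data key minikube rewrites is an assumption here, since the log only shows the replacement:

    // Sketch: add a host record to the coredns ConfigMap with client-go.
    // Whether minikube writes a NodeHosts entry or edits the Corefile is
    // not visible in the log; NodeHosts is assumed for illustration.
    package coredns

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func InjectHostRecord(ctx context.Context, c kubernetes.Interface, hostIP string) error {
        cm, err := c.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["NodeHosts"] = fmt.Sprintf("%s host.minikube.internal\n", hostIP)
        _, err = c.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }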
	I0610 19:40:32.331799    9512 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:40:32.331801    9512 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:40:32.332125    9512 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xda10600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:40:32.332144    9512 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xda10600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
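The two identical rest.Config dumps come from two kapi.go call sites loading the same kubeconfig: a TLS client cert/key pair for the multinode-353000 profile, the cluster CA, and the API server at https://192.169.0.19:8443. A sketch reconstructing that client with client-go's clientcmd, using the kubeconfig path from the log:

    // Sketch: build the same client config kapi.go dumps above.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/Users/jenkins/minikube-integration/19046-5942/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // cfg.Host and cfg.TLSClientConfig now match the dumped values.
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        v, err := client.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(cfg.Host, "is running Kubernetes", v.GitVersion)
    }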
	I0610 19:40:32.332407    9512 node_ready.go:35] waiting up to 6m0s for node "multinode-353000" to be "Ready" ...
	I0610 19:40:32.332482    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:32.332482    9512 round_trippers.go:463] GET https://192.169.0.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 19:40:32.332501    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.332503    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.332523    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.332524    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.332528    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.332529    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.342433    9512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 19:40:32.342482    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.342489    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:32 GMT
	I0610 19:40:32.342492    9512 round_trippers.go:580]     Audit-Id: 7a6457f5-5f50-4be8-81b2-576ba09e10e8
	I0610 19:40:32.342497    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.342500    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.342503    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.342506    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.343646    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:32.344100    9512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 19:40:32.344139    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.344147    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.344154    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.344157    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.344160    9512 round_trippers.go:580]     Content-Length: 291
	I0610 19:40:32.344162    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:32 GMT
	I0610 19:40:32.344165    9512 round_trippers.go:580]     Audit-Id: 766db618-ac85-4820-bf04-0f3509424e6e
	I0610 19:40:32.344168    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.344180    9512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7854d1e-12e2-47e4-83ed-7e1d43d78c58","resourceVersion":"357","creationTimestamp":"2024-06-11T02:40:16Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 19:40:32.344420    9512 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7854d1e-12e2-47e4-83ed-7e1d43d78c58","resourceVersion":"357","creationTimestamp":"2024-06-11T02:40:16Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 19:40:32.344470    9512 round_trippers.go:463] PUT https://192.169.0.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 19:40:32.344479    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.344485    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.344489    9512 round_trippers.go:473]     Content-Type: application/json
	I0610 19:40:32.344491    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.349047    9512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:40:32.349060    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.349065    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.349069    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.349071    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.349078    9512 round_trippers.go:580]     Content-Length: 291
	I0610 19:40:32.349081    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:32 GMT
	I0610 19:40:32.349085    9512 round_trippers.go:580]     Audit-Id: 52ee962d-a6ae-4398-88d0-97b9a8b1ed71
	I0610 19:40:32.349089    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.349104    9512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7854d1e-12e2-47e4-83ed-7e1d43d78c58","resourceVersion":"363","creationTimestamp":"2024-06-11T02:40:16Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
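This GET/PUT pair is kapi.go scaling the coredns Deployment from 2 replicas to 1 through the autoscaling/v1 Scale subresource rather than editing the Deployment spec itself. client-go exposes the same round trip as GetScale/UpdateScale; a sketch:

    // Sketch: the Scale-subresource round trip shown above, via client-go.
    package scale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func RescaleCoreDNS(ctx context.Context, c kubernetes.Interface, replicas int32) error {
        s, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        s.Spec.Replicas = replicas // 2 -> 1 in the log; status catches up once a pod exits
        _, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
        return err
    }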
	I0610 19:40:32.372973    9512 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 19:40:32.375093    9512 main.go:141] libmachine: Making call to close driver server
	I0610 19:40:32.375105    9512 main.go:141] libmachine: (multinode-353000) Calling .Close
	I0610 19:40:32.375245    9512 main.go:141] libmachine: (multinode-353000) DBG | Closing plugin on server side
	I0610 19:40:32.375267    9512 main.go:141] libmachine: Successfully made call to close driver server
	I0610 19:40:32.375277    9512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 19:40:32.375285    9512 main.go:141] libmachine: Making call to close driver server
	I0610 19:40:32.375291    9512 main.go:141] libmachine: (multinode-353000) Calling .Close
	I0610 19:40:32.375412    9512 main.go:141] libmachine: Successfully made call to close driver server
	I0610 19:40:32.375420    9512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 19:40:32.375493    9512 round_trippers.go:463] GET https://192.169.0.19:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 19:40:32.375499    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.375505    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.375507    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.377243    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:32.377253    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.377258    9512 round_trippers.go:580]     Audit-Id: fba2845b-6c85-4873-a1ce-d10ba27a9c66
	I0610 19:40:32.377260    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.377263    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.377266    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.377269    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.377272    9512 round_trippers.go:580]     Content-Length: 1273
	I0610 19:40:32.377284    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:32 GMT
	I0610 19:40:32.377348    9512 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"369"},"items":[{"metadata":{"name":"standard","uid":"243cfbc0-4727-4e38-b8dc-bc9f0931b357","resourceVersion":"367","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0610 19:40:32.377587    9512 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"243cfbc0-4727-4e38-b8dc-bc9f0931b357","resourceVersion":"367","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0610 19:40:32.377617    9512 round_trippers.go:463] PUT https://192.169.0.19:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 19:40:32.377623    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.377629    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.377632    9512 round_trippers.go:473]     Content-Type: application/json
	I0610 19:40:32.377636    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.382403    9512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:40:32.382414    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.382420    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.382424    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.382427    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.382430    9512 round_trippers.go:580]     Content-Length: 1220
	I0610 19:40:32.382433    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:32 GMT
	I0610 19:40:32.382444    9512 round_trippers.go:580]     Audit-Id: 98831db3-8b6b-4296-b8f6-6a51a5b42141
	I0610 19:40:32.382449    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.382497    9512 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"243cfbc0-4727-4e38-b8dc-bc9f0931b357","resourceVersion":"367","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
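Here the default-storageclass addon reads back the "standard" StorageClass that kubectl just created and PUTs it with the storageclass.kubernetes.io/is-default-class annotation confirmed as "true". The equivalent update with client-go's storage client, as a sketch:

    // Sketch: ensure the "standard" StorageClass is marked default, as the
    // GET/PUT pair above does.
    package storageclass

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func MarkDefault(ctx context.Context, c kubernetes.Interface) error {
        sc, err := c.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }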
	I0610 19:40:32.382578    9512 main.go:141] libmachine: Making call to close driver server
	I0610 19:40:32.382587    9512 main.go:141] libmachine: (multinode-353000) Calling .Close
	I0610 19:40:32.382752    9512 main.go:141] libmachine: Successfully made call to close driver server
	I0610 19:40:32.382761    9512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 19:40:32.603927    9512 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 19:40:32.603947    9512 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 19:40:32.603955    9512 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 19:40:32.603961    9512 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 19:40:32.603966    9512 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 19:40:32.603970    9512 command_runner.go:130] > pod/storage-provisioner created
	I0610 19:40:32.603993    9512 main.go:141] libmachine: Making call to close driver server
	I0610 19:40:32.604001    9512 main.go:141] libmachine: (multinode-353000) Calling .Close
	I0610 19:40:32.604184    9512 main.go:141] libmachine: (multinode-353000) DBG | Closing plugin on server side
	I0610 19:40:32.604194    9512 main.go:141] libmachine: Successfully made call to close driver server
	I0610 19:40:32.604201    9512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 19:40:32.604210    9512 main.go:141] libmachine: Making call to close driver server
	I0610 19:40:32.604215    9512 main.go:141] libmachine: (multinode-353000) Calling .Close
	I0610 19:40:32.604349    9512 main.go:141] libmachine: (multinode-353000) DBG | Closing plugin on server side
	I0610 19:40:32.604375    9512 main.go:141] libmachine: Successfully made call to close driver server
	I0610 19:40:32.604388    9512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 19:40:32.627538    9512 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 19:40:32.687584    9512 addons.go:510] duration metric: took 928.311486ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0610 19:40:32.833521    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:32.833538    9512 round_trippers.go:463] GET https://192.169.0.19:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 19:40:32.833541    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.833553    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:32.833560    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.833564    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.833567    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:32.833570    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:32.835821    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:32.835838    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.835846    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.835851    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.835856    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.835872    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.835878    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:33 GMT
	I0610 19:40:32.835881    9512 round_trippers.go:580]     Audit-Id: a13cdfa0-0389-41ac-a582-c2586b96d03b
	I0610 19:40:32.836080    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:32.836334    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:32.836345    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:32.836352    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:32.836356    9512 round_trippers.go:580]     Content-Length: 291
	I0610 19:40:32.836380    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:33 GMT
	I0610 19:40:32.836389    9512 round_trippers.go:580]     Audit-Id: 5bf48f25-6ed3-4e58-8482-a58210a519ee
	I0610 19:40:32.836393    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:32.836397    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:32.836400    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:32.836418    9512 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7854d1e-12e2-47e4-83ed-7e1d43d78c58","resourceVersion":"374","creationTimestamp":"2024-06-11T02:40:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 19:40:32.836476    9512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-353000" context rescaled to 1 replicas
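Everything from here to the end of the excerpt is node_ready.go's poll loop: a GET of the node roughly every 500ms, logging has status "Ready":"False" until the kubelet flips the NodeReady condition, bounded by the 6m0s budget announced earlier. The same loop expressed with apimachinery's wait helpers, as a sketch:

    // Sketch: the readiness poll that produces the repeated GETs below.
    package nodeready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func WaitReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and retry
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }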
	I0610 19:40:33.333072    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:33.333090    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:33.333099    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:33.333104    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:33.335746    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:33.335758    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:33.335763    9512 round_trippers.go:580]     Audit-Id: 24093915-f553-49bf-9cab-3562993a155f
	I0610 19:40:33.335766    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:33.335768    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:33.335771    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:33.335774    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:33.335778    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:33 GMT
	I0610 19:40:33.336031    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:33.833401    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:33.833422    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:33.833434    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:33.833441    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:33.835910    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:33.835926    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:33.835934    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:33.835939    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:33.835943    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:33.835947    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:33.835952    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:34 GMT
	I0610 19:40:33.835955    9512 round_trippers.go:580]     Audit-Id: 8b281346-67ef-4533-9cb6-d0e2360c633d
	I0610 19:40:33.836301    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:34.332635    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:34.332650    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:34.332656    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:34.332660    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:34.334113    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:34.334125    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:34.334133    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:34.334150    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:34.334157    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:34.334161    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:34.334164    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:34 GMT
	I0610 19:40:34.334167    9512 round_trippers.go:580]     Audit-Id: 2e908c81-3d8a-4916-b6fb-2cb083bd7ea2
	I0610 19:40:34.334308    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:34.334491    9512 node_ready.go:53] node "multinode-353000" has status "Ready":"False"
	I0610 19:40:34.832705    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:34.832716    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:34.832722    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:34.832725    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:34.834090    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:34.834099    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:34.834105    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:34.834109    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:34.834112    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:35 GMT
	I0610 19:40:34.834115    9512 round_trippers.go:580]     Audit-Id: dfe01543-e671-48d1-838e-9f6c770139bf
	I0610 19:40:34.834119    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:34.834121    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:34.834223    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:35.333137    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:35.333150    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:35.333157    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:35.333161    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:35.334564    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:35.334576    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:35.334582    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:35.334585    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:35.334594    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:35.334597    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:35 GMT
	I0610 19:40:35.334600    9512 round_trippers.go:580]     Audit-Id: 629606a0-22cf-4a00-b93e-c2e296d51872
	I0610 19:40:35.334603    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:35.334703    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:35.832454    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:35.832471    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:35.832479    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:35.832483    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:35.834366    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:35.834376    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:35.834382    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:35.834386    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:36 GMT
	I0610 19:40:35.834390    9512 round_trippers.go:580]     Audit-Id: 3b941c5b-94f6-421e-93a8-6e9941b8db78
	I0610 19:40:35.834392    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:35.834411    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:35.834417    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:35.834599    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:36.333234    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:36.333255    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:36.333265    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:36.333270    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:36.335845    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:36.335857    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:36.335864    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:36.335868    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:36.335886    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:36.335892    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:36.335896    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:36 GMT
	I0610 19:40:36.335900    9512 round_trippers.go:580]     Audit-Id: a4e6452a-e888-4444-a809-88e400d35be2
	I0610 19:40:36.336162    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:36.336420    9512 node_ready.go:53] node "multinode-353000" has status "Ready":"False"
	I0610 19:40:36.833701    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:36.833721    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:36.833732    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:36.833739    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:36.836232    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:36.836247    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:36.836261    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:36.836277    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:36.836290    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:37 GMT
	I0610 19:40:36.836299    9512 round_trippers.go:580]     Audit-Id: c3ff2c37-9625-4029-9e2f-807f627ee7ee
	I0610 19:40:36.836304    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:36.836314    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:36.836668    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:37.333133    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:37.333156    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:37.333166    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:37.333172    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:37.335779    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:37.335790    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:37.335796    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:37.335800    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:37.335805    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:37.335813    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:37 GMT
	I0610 19:40:37.335820    9512 round_trippers.go:580]     Audit-Id: f2887530-db8f-4df5-a2b1-72c219565236
	I0610 19:40:37.335827    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:37.336261    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:37.832733    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:37.832759    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:37.832772    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:37.832778    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:37.835212    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:37.835235    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:37.835245    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:37.835266    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:37.835270    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:37.835274    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:37.835278    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:38 GMT
	I0610 19:40:37.835282    9512 round_trippers.go:580]     Audit-Id: 4b7542d0-8258-49e2-9d13-2d6e50028257
	I0610 19:40:37.835530    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:38.332729    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:38.332753    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:38.332766    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:38.332772    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:38.335101    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:38.335124    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:38.335136    9512 round_trippers.go:580]     Audit-Id: aad69c9c-086d-4a4a-b41a-f6d179f83d42
	I0610 19:40:38.335157    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:38.335166    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:38.335170    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:38.335173    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:38.335176    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:38 GMT
	I0610 19:40:38.335281    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:38.832829    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:38.832854    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:38.832866    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:38.832870    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:38.835033    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:38.835046    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:38.835053    9512 round_trippers.go:580]     Audit-Id: fd999105-1247-436e-a1bb-c069092972dd
	I0610 19:40:38.835057    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:38.835068    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:38.835084    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:38.835089    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:38.835106    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:39 GMT
	I0610 19:40:38.835435    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:38.835677    9512 node_ready.go:53] node "multinode-353000" has status "Ready":"False"
	I0610 19:40:39.332515    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:39.332535    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:39.332547    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:39.332553    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:39.334500    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:39.334513    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:39.334520    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:39 GMT
	I0610 19:40:39.334523    9512 round_trippers.go:580]     Audit-Id: 51c15b41-9275-4365-9d10-ff8d0c8d27f3
	I0610 19:40:39.334528    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:39.334532    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:39.334535    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:39.334538    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:39.334770    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:39.833522    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:39.833545    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:39.833557    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:39.833564    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:39.836154    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:39.836168    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:39.836175    9512 round_trippers.go:580]     Audit-Id: a9eeff3a-9b9b-43ff-bb89-7226b5c899e5
	I0610 19:40:39.836181    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:39.836184    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:39.836203    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:39.836208    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:39.836212    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:40 GMT
	I0610 19:40:39.836325    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:40.333576    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:40.333600    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:40.333610    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:40.333618    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:40.335940    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:40.335958    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:40.335965    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:40.335971    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:40.335974    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:40.335978    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:40.335981    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:40 GMT
	I0610 19:40:40.335985    9512 round_trippers.go:580]     Audit-Id: 10658e0e-4673-45d8-8c1f-f100ee1c0c09
	I0610 19:40:40.336084    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"317","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0610 19:40:40.834233    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:40.834253    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:40.834264    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:40.834272    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:40.836750    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:40.836762    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:40.836769    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:40.836773    9512 round_trippers.go:580]     Audit-Id: 34ef8076-0561-4a14-ae1b-eebb4a52a0a6
	I0610 19:40:40.836776    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:40.836780    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:40.836784    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:40.836795    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:40.836870    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:40.837109    9512 node_ready.go:49] node "multinode-353000" has status "Ready":"True"
	I0610 19:40:40.837125    9512 node_ready.go:38] duration metric: took 8.504979908s for node "multinode-353000" to be "Ready" ...
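The node_ready wait above polls GET /api/v1/nodes/multinode-353000 roughly every 500ms until the node reports a Ready condition with status True. A minimal client-go sketch of that loop (a hypothetical waitNodeReady helper, assuming a kubeconfig at the default location; not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node until its Ready condition is True
// or the timeout elapses.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the log
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitNodeReady(context.Background(), cs, "multinode-353000", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}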
	I0610 19:40:40.837137    9512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:40:40.837183    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:40:40.837190    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:40.837197    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:40.837201    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:40.839640    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:40.839671    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:40.839691    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:40.839703    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:40.839707    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:40.839712    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:40.839720    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:40.839728    9512 round_trippers.go:580]     Audit-Id: e250b8c3-0d58-4a3e-94d1-1bdd7521349a
	I0610 19:40:40.841156    9512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"403","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0610 19:40:40.843413    9512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:40.843457    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:40:40.843462    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:40.843468    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:40.843472    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:40.844915    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:40.844925    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:40.844932    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:40.844941    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:40.844949    9512 round_trippers.go:580]     Audit-Id: f9b99845-4dc3-4e91-9b73-6000bac55874
	I0610 19:40:40.844956    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:40.844977    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:40.844985    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:40.845178    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"403","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0610 19:40:40.845406    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:40.845413    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:40.845418    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:40.845423    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:40.846476    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:40.846484    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:40.846491    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:40.846512    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:40.846520    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:40.846525    9512 round_trippers.go:580]     Audit-Id: 4bb628c9-e9b9-4a5b-adb6-bec7cf31d932
	I0610 19:40:40.846529    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:40.846533    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:40.846656    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.343599    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:40:41.343654    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.343673    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.343679    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.345855    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:41.345871    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.345877    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.345880    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.345882    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.345885    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:41.345889    9512 round_trippers.go:580]     Audit-Id: 15093fb2-53de-4430-a8f1-5e1dbea8f80a
	I0610 19:40:41.345891    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.346021    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"403","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0610 19:40:41.346314    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:41.346321    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.346327    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.346330    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.347363    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:41.347375    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.347387    9512 round_trippers.go:580]     Audit-Id: e89ed03b-f098-475a-a856-8a4830608e39
	I0610 19:40:41.347395    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.347399    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.347403    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.347406    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.347408    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:41 GMT
	I0610 19:40:41.347568    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.844466    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:40:41.844488    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.844500    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.844509    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.847195    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:41.847209    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.847216    9512 round_trippers.go:580]     Audit-Id: 79c51451-de18-4518-b49f-674544a680fb
	I0610 19:40:41.847221    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.847224    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.847227    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.847232    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.847235    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.847587    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"419","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0610 19:40:41.847952    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:41.847962    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.847970    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.847974    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.849391    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:41.849399    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.849403    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.849407    9512 round_trippers.go:580]     Audit-Id: 13f81b31-b812-4510-aace-1ffc624bdd0a
	I0610 19:40:41.849410    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.849413    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.849416    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.849419    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.849635    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.849798    9512 pod_ready.go:92] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:41.849806    9512 pod_ready.go:81] duration metric: took 1.006414598s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
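Each per-pod wait (coredns above, then etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler below) reduces to the same check: fetch the pod and look for a Ready condition with status True. A sketch under the same client-go assumptions as the node helper (hypothetical code, not the test's own):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PodReady reports whether the named pod currently has a Ready
// condition with status True; callers poll it until true or timeout.
func PodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}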
	I0610 19:40:41.849812    9512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.849838    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:40:41.849842    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.849848    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.849851    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.851198    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:41.851205    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.851210    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.851213    9512 round_trippers.go:580]     Audit-Id: c794291d-7457-435f-88f1-513db82bd753
	I0610 19:40:41.851217    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.851220    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.851222    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.851224    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.851415    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"394","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0610 19:40:41.851641    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:41.851648    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.851653    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.851658    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.852721    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:41.852731    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.852736    9512 round_trippers.go:580]     Audit-Id: b460c893-4da3-4e96-961b-b598cab354b0
	I0610 19:40:41.852740    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.852743    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.852746    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.852749    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.852752    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.852930    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.853082    9512 pod_ready.go:92] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:41.853089    9512 pod_ready.go:81] duration metric: took 3.272947ms for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.853109    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.853144    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:40:41.853149    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.853155    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.853160    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.854298    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:40:41.854325    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.854330    9512 round_trippers.go:580]     Audit-Id: 9906d4a5-bb74-4dd0-9eff-ae389c1977c7
	I0610 19:40:41.854334    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.854336    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.854339    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.854341    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.854344    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.854534    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"396","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0610 19:40:41.854763    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:41.854769    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.854775    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.854788    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.855648    9512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:40:41.855659    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.855663    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.855666    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.855669    9512 round_trippers.go:580]     Audit-Id: 713c2692-7875-4be0-bd43-59d2c5ddeef2
	I0610 19:40:41.855671    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.855674    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.855677    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.855835    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.855993    9512 pod_ready.go:92] pod "kube-apiserver-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:41.856001    9512 pod_ready.go:81] duration metric: took 2.882657ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.856007    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.856040    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:40:41.856045    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.856050    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.856054    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.856988    9512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:40:41.856997    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.857003    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.857008    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.857012    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.857016    9512 round_trippers.go:580]     Audit-Id: 3640b1f2-faac-415f-8704-bb5f0dadfe16
	I0610 19:40:41.857020    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.857023    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.857192    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"393","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0610 19:40:41.857419    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:41.857426    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.857431    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.857435    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.858395    9512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:40:41.858411    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.858416    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.858422    9512 round_trippers.go:580]     Audit-Id: d4fb99e8-6590-4e79-832b-7864cb6e6eb7
	I0610 19:40:41.858424    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.858426    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.858429    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.858431    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.858625    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:41.858788    9512 pod_ready.go:92] pod "kube-controller-manager-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:41.858798    9512 pod_ready.go:81] duration metric: took 2.785392ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.858806    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:41.858832    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:40:41.858836    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:41.858841    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:41.858845    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:41.859846    9512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:40:41.859852    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:41.859857    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:41.859861    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:41.859864    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:41.859866    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:41.859869    9512 round_trippers.go:580]     Audit-Id: fce66d51-1faf-4731-b6f0-793bfcd532c0
	I0610 19:40:41.859871    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:41.860046    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"384","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0610 19:40:42.034439    9512 request.go:629] Waited for 174.11688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:42.034492    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:42.034502    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.034515    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.034521    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.036980    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:42.036999    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.037007    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.037012    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.037017    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.037020    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:42.037024    9512 round_trippers.go:580]     Audit-Id: 8ab306f0-504d-47ef-9da7-397bf467f050
	I0610 19:40:42.037028    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.037134    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:42.037388    9512 pod_ready.go:92] pod "kube-proxy-v7s4q" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:42.037398    9512 pod_ready.go:81] duration metric: took 178.593296ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
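The "Waited for ... due to client-side throttling" messages above come from client-go's own rate limiter, not the API server's priority-and-fairness machinery: once the client's Burst allowance is spent, further requests are delayed to stay under its QPS. A sketch of where those knobs live on the rest.Config (the values here are illustrative, not what minikube sets):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Client-side rate limiting: up to Burst requests go out immediately,
	// after which requests are delayed to average QPS per second. Those
	// delays are what request.go logs as "client-side throttling".
	config.QPS = 5    // illustrative steady-state rate
	config.Burst = 10 // illustrative burst allowance
	cs := kubernetes.NewForConfigOrDie(config)
	_ = cs // use the clientset as usual; throttling is transparent
	fmt.Println("clientset configured with explicit QPS/Burst")
}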
	I0610 19:40:42.037406    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:42.234740    9512 request.go:629] Waited for 197.285224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:40:42.234826    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:40:42.234834    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.234845    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.234852    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.237464    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:42.237482    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.237489    9512 round_trippers.go:580]     Audit-Id: 76752373-8ce9-40ee-89b6-0805e88e8149
	I0610 19:40:42.237493    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.237497    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.237500    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.237504    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.237510    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:42.237735    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"395","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0610 19:40:42.434275    9512 request.go:629] Waited for 196.220031ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:42.434346    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:40:42.434355    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.434370    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.434377    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.436753    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:42.436767    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.436774    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.436779    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:42.436797    9512 round_trippers.go:580]     Audit-Id: b1623e81-2496-4a70-96ab-b952eef5cef0
	I0610 19:40:42.436804    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.436807    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.436811    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.436911    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0610 19:40:42.437173    9512 pod_ready.go:92] pod "kube-scheduler-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:40:42.437184    9512 pod_ready.go:81] duration metric: took 399.785838ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:40:42.437193    9512 pod_ready.go:38] duration metric: took 1.600097221s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:40:42.437222    9512 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:40:42.437301    9512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:40:42.449910    9512 command_runner.go:130] > 1866
	I0610 19:40:42.450038    9512 api_server.go:72] duration metric: took 10.691096691s to wait for apiserver process to appear ...
	I0610 19:40:42.450049    9512 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:40:42.450065    9512 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:40:42.453058    9512 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
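The healthz probe is a plain HTTPS GET that is considered healthy on a 200 response with body "ok". A self-contained sketch (InsecureSkipVerify only keeps the example standalone; a real check should trust the cluster CA, and the test itself authenticates with its kubeconfig credentials):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip server-cert verification
		// instead of wiring in the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.19:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // healthy: 200 ok
}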
	I0610 19:40:42.453094    9512 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:40:42.453099    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.453105    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.453111    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.453570    9512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:40:42.453579    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.453584    9512 round_trippers.go:580]     Content-Length: 263
	I0610 19:40:42.453601    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:42.453607    9512 round_trippers.go:580]     Audit-Id: 1ce55750-8c89-4b9a-bdc4-d09c7a387cc1
	I0610 19:40:42.453611    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.453615    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.453618    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.453621    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.453652    9512 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:40:42.453698    9512 api_server.go:141] control plane version: v1.30.1
	I0610 19:40:42.453707    9512 api_server.go:131] duration metric: took 3.654458ms to wait for apiserver health ...
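The /version body above follows the standard Kubernetes version.Info shape, so pulling out the control-plane version needs only a small struct (a sketch, not the test's parser):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version response shown above;
// fields that are not needed can simply be omitted.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	raw := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.1
}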
	I0610 19:40:42.453715    9512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:40:42.634393    9512 request.go:629] Waited for 180.602157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:40:42.634470    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:40:42.634480    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.634491    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.634497    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.637954    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:40:42.637975    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.637982    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.637988    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.637992    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.637997    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:42 GMT
	I0610 19:40:42.638000    9512 round_trippers.go:580]     Audit-Id: 956e13e5-9e15-435e-8d73-db34f446e368
	I0610 19:40:42.638004    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.639140    9512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"419","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0610 19:40:42.640498    9512 system_pods.go:59] 8 kube-system pods found
	I0610 19:40:42.640512    9512 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:40:42.640516    9512 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:40:42.640519    9512 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:40:42.640522    9512 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:40:42.640525    9512 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:40:42.640528    9512 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:40:42.640530    9512 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:40:42.640533    9512 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:40:42.640537    9512 system_pods.go:74] duration metric: took 186.824731ms to wait for pod list to return data ...
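The system_pods summary above comes from a single list call over the kube-system namespace. A sketch of the equivalent client-go call (same clientset assumptions as the earlier helpers):

package pods

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printKubeSystemPods lists kube-system pods and reports whether each
// is in the Running phase, mirroring the summary lines in the log.
func printKubeSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(list.Items))
	for _, p := range list.Items {
		fmt.Printf("%q Running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
	}
	return nil
}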
	I0610 19:40:42.640542    9512 default_sa.go:34] waiting for default service account to be created ...
	I0610 19:40:42.835612    9512 request.go:629] Waited for 194.99029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:40:42.835661    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:40:42.835669    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:42.835687    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:42.835694    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:42.837829    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:40:42.837845    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:42.837853    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:42.837858    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:42.837866    9512 round_trippers.go:580]     Content-Length: 261
	I0610 19:40:42.837871    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:43 GMT
	I0610 19:40:42.837875    9512 round_trippers.go:580]     Audit-Id: 0958a7d9-4602-43c7-b641-a522917b533f
	I0610 19:40:42.837878    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:42.837881    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:42.837894    9512 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"809c40cb-86f1-483d-98cc-1b46432644d5","resourceVersion":"323","creationTimestamp":"2024-06-11T02:40:31Z"}}]}
	I0610 19:40:42.838048    9512 default_sa.go:45] found service account: "default"
	I0610 19:40:42.838060    9512 default_sa.go:55] duration metric: took 197.519189ms for default service account to be created ...
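The default_sa wait applies the same pattern to ServiceAccounts. A sketch that checks for the "default" account directly (a hypothetical helper; the log above lists the namespace rather than fetching one object):

package sa

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDefaultServiceAccount reports whether the "default" ServiceAccount
// exists in the default namespace.
func hasDefaultServiceAccount(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}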
	I0610 19:40:42.838068    9512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 19:40:43.034508    9512 request.go:629] Waited for 196.388562ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:40:43.034601    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:40:43.034612    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:43.034623    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:43.034629    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:43.038123    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:40:43.038136    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:43.038143    9512 round_trippers.go:580]     Audit-Id: 640ba2bc-ae44-4b05-b020-68f6d55f41e8
	I0610 19:40:43.038147    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:43.038151    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:43.038153    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:43.038157    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:43.038161    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:43 GMT
	I0610 19:40:43.038909    9512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"419","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0610 19:40:43.040172    9512 system_pods.go:86] 8 kube-system pods found
	I0610 19:40:43.040185    9512 system_pods.go:89] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:40:43.040189    9512 system_pods.go:89] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:40:43.040193    9512 system_pods.go:89] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:40:43.040196    9512 system_pods.go:89] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:40:43.040202    9512 system_pods.go:89] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:40:43.040205    9512 system_pods.go:89] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:40:43.040209    9512 system_pods.go:89] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:40:43.040212    9512 system_pods.go:89] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:40:43.040218    9512 system_pods.go:126] duration metric: took 202.151394ms to wait for k8s-apps to be running ...
	I0610 19:40:43.040222    9512 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 19:40:43.040278    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:40:43.051916    9512 system_svc.go:56] duration metric: took 11.689998ms WaitForService to wait for kubelet
	I0610 19:40:43.051929    9512 kubeadm.go:576] duration metric: took 11.293010011s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:40:43.051951    9512 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:40:43.235392    9512 request.go:629] Waited for 183.364225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes
	I0610 19:40:43.235494    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:40:43.235505    9512 round_trippers.go:469] Request Headers:
	I0610 19:40:43.235517    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:40:43.235524    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:40:43.238555    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:40:43.238570    9512 round_trippers.go:577] Response Headers:
	I0610 19:40:43.238577    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:40:43.238581    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:40:43.238584    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:40:43.238587    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:40:43 GMT
	I0610 19:40:43.238589    9512 round_trippers.go:580]     Audit-Id: 4fc388c4-a41f-4f28-aa54-f2e7a6cf96f0
	I0610 19:40:43.238594    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:40:43.239091    9512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"400","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0610 19:40:43.239396    9512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:40:43.239414    9512 node_conditions.go:123] node cpu capacity is 2
	I0610 19:40:43.239428    9512 node_conditions.go:105] duration metric: took 187.478286ms to run NodePressure ...
	I0610 19:40:43.239440    9512 start.go:240] waiting for startup goroutines ...
	I0610 19:40:43.239449    9512 start.go:245] waiting for cluster config update ...
	I0610 19:40:43.239460    9512 start.go:254] writing updated cluster config ...
	I0610 19:40:43.263160    9512 out.go:177] 
	I0610 19:40:43.284573    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:40:43.284678    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:40:43.307053    9512 out.go:177] * Starting "multinode-353000-m02" worker node in "multinode-353000" cluster
	I0610 19:40:43.365154    9512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:40:43.365187    9512 cache.go:56] Caching tarball of preloaded images
	I0610 19:40:43.365404    9512 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:40:43.365423    9512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:40:43.365522    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:40:43.366327    9512 start.go:360] acquireMachinesLock for multinode-353000-m02: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:40:43.366446    9512 start.go:364] duration metric: took 94.836µs to acquireMachinesLock for "multinode-353000-m02"
	I0610 19:40:43.366473    9512 start.go:93] Provisioning new machine with config: &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 19:40:43.366560    9512 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0610 19:40:43.388159    9512 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 19:40:43.388395    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:40:43.388434    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:40:43.398661    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53080
	I0610 19:40:43.398996    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:40:43.399373    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:40:43.399387    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:40:43.399607    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:40:43.399734    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:40:43.399825    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:43.399939    9512 start.go:159] libmachine.API.Create for "multinode-353000" (driver="hyperkit")
	I0610 19:40:43.399958    9512 client.go:168] LocalClient.Create starting
	I0610 19:40:43.399990    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem
	I0610 19:40:43.400029    9512 main.go:141] libmachine: Decoding PEM data...
	I0610 19:40:43.400041    9512 main.go:141] libmachine: Parsing certificate...
	I0610 19:40:43.400078    9512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem
	I0610 19:40:43.400109    9512 main.go:141] libmachine: Decoding PEM data...
	I0610 19:40:43.400119    9512 main.go:141] libmachine: Parsing certificate...
	I0610 19:40:43.400136    9512 main.go:141] libmachine: Running pre-create checks...
	I0610 19:40:43.400141    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .PreCreateCheck
	I0610 19:40:43.400215    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:43.400251    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:40:43.409172    9512 main.go:141] libmachine: Creating machine...
	I0610 19:40:43.409189    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .Create
	I0610 19:40:43.409338    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:43.409560    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | I0610 19:40:43.409318    9544 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:40:43.409670    9512 main.go:141] libmachine: (multinode-353000-m02) Downloading /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-5942/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 19:40:43.671325    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | I0610 19:40:43.671260    9544 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa...
	I0610 19:40:43.887744    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | I0610 19:40:43.887649    9544 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk...
	I0610 19:40:43.887757    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Writing magic tar header
	I0610 19:40:43.887812    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Writing SSH key tar header
	I0610 19:40:43.888583    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | I0610 19:40:43.888508    9544 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02 ...
	I0610 19:40:44.258537    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:44.258556    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid
	I0610 19:40:44.258566    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Using UUID 3b15a703-00dc-45e7-88e9-620fa037ae16
	I0610 19:40:44.277584    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Generated MAC 9a:45:71:59:94:c9
	I0610 19:40:44.277616    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:40:44.277661    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00059c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:40:44.277735    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00059c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:40:44.277906    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3b15a703-00dc-45e7-88e9-620fa037ae16", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:40:44.277988    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3b15a703-00dc-45e7-88e9-620fa037ae16 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:40:44.278016    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:40:44.280859    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 DEBUG: hyperkit: Pid is 9545
	I0610 19:40:44.281383    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 0
	I0610 19:40:44.281404    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:44.281508    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:44.282471    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:44.282572    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:40:44.282590    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:40:44.282626    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:40:44.282638    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:40:44.282658    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:40:44.282679    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:40:44.282696    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:40:44.282709    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:40:44.282730    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:40:44.282748    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:40:44.282773    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:40:44.282786    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:40:44.282800    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:40:44.282810    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:40:44.282822    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:40:44.282831    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:40:44.282838    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:40:44.282846    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:40:44.282870    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:40:44.288699    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:40:44.297093    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:40:44.297995    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:40:44.298026    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:40:44.298045    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:40:44.298074    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:40:44.682633    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:40:44.682649    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:40:44.797323    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:40:44.797344    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:40:44.797353    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:40:44.797360    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:40:44.798217    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:40:44.798232    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:44 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:40:46.283212    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 1
	I0610 19:40:46.283266    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:46.283413    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:46.284320    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:46.284379    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:40:46.284391    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:40:46.284401    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:40:46.284408    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:40:46.284438    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:40:46.284448    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:40:46.284457    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:40:46.284465    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:40:46.284475    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:40:46.284484    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:40:46.284491    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:40:46.284499    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:40:46.284506    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:40:46.284515    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:40:46.284530    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:40:46.284542    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:40:46.284553    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:40:46.284568    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:40:46.284578    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:40:48.284982    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 2
	I0610 19:40:48.285000    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:48.285085    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:48.285998    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:48.286069    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:40:48.286085    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:40:48.286097    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:40:48.286107    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:40:48.286117    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:40:48.286124    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:40:48.286132    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:40:48.286142    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:40:48.286159    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:40:48.286168    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:40:48.286183    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:40:48.286195    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:40:48.286203    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:40:48.286211    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:40:48.286219    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:40:48.286227    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:40:48.286234    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:40:48.286243    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:40:48.286270    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:40:50.139094    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:50 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:40:50.139168    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:50 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:40:50.139179    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:50 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:40:50.162642    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:40:50 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:40:50.288361    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 3
	I0610 19:40:50.288385    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:50.288533    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:50.290079    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:50.290178    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:40:50.290205    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:40:50.290263    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:40:50.290302    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:40:50.290318    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:40:50.290330    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:40:50.290349    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:40:50.290366    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:40:50.290391    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:40:50.290406    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:40:50.290418    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:40:50.290437    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:40:50.290461    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:40:50.290474    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:40:50.290483    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:40:50.290492    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:40:50.290506    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:40:50.290514    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:40:50.290526    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:40:52.291045    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 4
	I0610 19:40:52.291060    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:52.291158    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:52.291996    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:52.292051    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0610 19:40:52.292062    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:40:52.292070    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:f6:8f:54:40:a3:d8 ID:1,f6:8f:54:40:a3:d8 Lease:0x6667b8ea}
	I0610 19:40:52.292077    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:ac:70:12:18:62 ID:1,6a:ac:70:12:18:62 Lease:0x6667b8b4}
	I0610 19:40:52.292085    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:da:c9:41:41:9c:2c ID:1,da:c9:41:41:9c:2c Lease:0x666909e0}
	I0610 19:40:52.292093    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4a:6e:19:f1:d5:2f ID:1,4a:6e:19:f1:d5:2f Lease:0x666909b8}
	I0610 19:40:52.292101    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:4e:fd:58:36:64:bd ID:1,4e:fd:58:36:64:bd Lease:0x66690976}
	I0610 19:40:52.292106    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:5e:c7:82:72:8d:56 ID:1,5e:c7:82:72:8d:56 Lease:0x6667b7eb}
	I0610 19:40:52.292114    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:42:60:54:45:36:da ID:1,42:60:54:45:36:da Lease:0x66690630}
	I0610 19:40:52.292121    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ee:1c:9b:ec:b1:99 ID:1,ee:1c:9b:ec:b1:99 Lease:0x6667b295}
	I0610 19:40:52.292129    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:b2:9:95:14:e0:7b ID:1,b2:9:95:14:e0:7b Lease:0x66690610}
	I0610 19:40:52.292138    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:76:38:7e:2b:fe:41 ID:1,76:38:7e:2b:fe:41 Lease:0x666905fe}
	I0610 19:40:52.292146    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:c2:24:df:29:42:86 ID:1,c2:24:df:29:42:86 Lease:0x6669008b}
	I0610 19:40:52.292161    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:ca:ed:6c:b5:31:b5 ID:1,ca:ed:6c:b5:31:b5 Lease:0x6668ffc3}
	I0610 19:40:52.292169    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:9a:f8:ad:2:8c:c7 ID:1,9a:f8:ad:2:8c:c7 Lease:0x6668ff72}
	I0610 19:40:52.292176    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:26:f1:d1:5f:34:ec ID:1,26:f1:d1:5f:34:ec Lease:0x6668e6ac}
	I0610 19:40:52.292184    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:de:e6:30:86:70:77 ID:1,de:e6:30:86:70:77 Lease:0x6668d03a}
	I0610 19:40:52.292197    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:1f:35:5e:30:7f ID:1,2e:1f:35:5e:30:7f Lease:0x6668b84e}
	I0610 19:40:52.292205    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x6668e8ae}
	I0610 19:40:54.292579    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 5
	I0610 19:40:54.292605    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:54.292780    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:54.294290    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:40:54.294361    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0610 19:40:54.294419    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:40:54.294449    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:40:54.294455    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | Found match: 9a:45:71:59:94:c9
	I0610 19:40:54.294481    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | IP: 192.169.0.20
	I0610 19:40:54.295192    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:54.295338    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:54.295470    9512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 19:40:54.295482    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:40:54.295598    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:40:54.295678    9512 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:40:54.296693    9512 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 19:40:54.296701    9512 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 19:40:54.296705    9512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 19:40:54.296710    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:54.296795    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:54.296882    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:54.296967    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:54.297039    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:54.297150    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:54.297323    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:54.297330    9512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 19:40:54.315882    9512 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0610 19:40:57.373946    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:40:57.373959    9512 main.go:141] libmachine: Detecting the provisioner...
	I0610 19:40:57.373965    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.374108    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.374201    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.374295    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.374379    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.374506    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.374651    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.374659    9512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 19:40:57.431342    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 19:40:57.431381    9512 main.go:141] libmachine: found compatible host: buildroot
	I0610 19:40:57.431386    9512 main.go:141] libmachine: Provisioning with buildroot...
	I0610 19:40:57.431403    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:40:57.431545    9512 buildroot.go:166] provisioning hostname "multinode-353000-m02"
	I0610 19:40:57.431557    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:40:57.431643    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.431726    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.431813    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.431899    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.431991    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.432134    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.432275    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.432284    9512 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000-m02 && echo "multinode-353000-m02" | sudo tee /etc/hostname
	I0610 19:40:57.499666    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m02
	
	I0610 19:40:57.499681    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.499813    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.499912    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.500002    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.500099    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.500234    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.500382    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.500393    9512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:40:57.562123    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:40:57.562142    9512 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:40:57.562161    9512 buildroot.go:174] setting up certificates
	I0610 19:40:57.562169    9512 provision.go:84] configureAuth start
	I0610 19:40:57.562175    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:40:57.562299    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:40:57.562402    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.562490    9512 provision.go:143] copyHostCerts
	I0610 19:40:57.562522    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:40:57.562572    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:40:57.562578    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:40:57.562700    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:40:57.562902    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:40:57.562934    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:40:57.562939    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:40:57.563008    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:40:57.563154    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:40:57.563184    9512 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:40:57.563188    9512 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:40:57.563255    9512 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:40:57.563405    9512 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m02 san=[127.0.0.1 192.169.0.20 localhost minikube multinode-353000-m02]
	I0610 19:40:57.661526    9512 provision.go:177] copyRemoteCerts
	I0610 19:40:57.661606    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:40:57.661638    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.661837    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.661973    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.662160    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.662299    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:40:57.698702    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:40:57.698771    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 19:40:57.719085    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:40:57.719165    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 19:40:57.738809    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:40:57.738875    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:40:57.758695    9512 provision.go:87] duration metric: took 196.518476ms to configureAuth
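
The copyHostCerts step above (19:40:57.562490 onward) uses a remove-then-copy idiom for each certificate: if the destination file already exists it is deleted, then the source is copied into place. A minimal Go sketch of that idiom, with placeholder paths rather than minikube's actual profile layout:

package main

import (
	"io"
	"log"
	"os"
)

// replaceFile mirrors the found/rm/cp sequence in the log: stat the
// destination, remove it if present, then copy the source byte-for-byte.
func replaceFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		log.Printf("found %s, removing ...", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical paths standing in for .minikube/certs/key.pem -> .minikube/key.pem.
	if err := replaceFile("certs/key.pem", "key.pem"); err != nil {
		log.Fatal(err)
	}
}
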
	I0610 19:40:57.758709    9512 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:40:57.758844    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:40:57.758858    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:57.758999    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.759088    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.759178    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.759267    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.759358    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.759474    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.759601    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.759608    9512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:40:57.817803    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:40:57.817814    9512 buildroot.go:70] root file system type: tmpfs
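
The provisioner decides how to lay down the Docker unit by probing the guest's root filesystem type with `df --output=fstype / | tail -n 1` (GNU df, so this runs inside the Linux guest, not on the macOS host). A local sketch of the same probe:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `df --output=fstype /` prints a header line ("Type") followed by the
	// filesystem type of /; the last whitespace-separated field is the answer.
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(out))
	fmt.Println("root file system type:", fields[len(fields)-1])
}
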
	I0610 19:40:57.817911    9512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:40:57.817924    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.818052    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.818142    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.818229    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.818328    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.818472    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.818616    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.818658    9512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.19"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:40:57.888805    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.19
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:40:57.888828    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:57.888957    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:57.889048    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.889148    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:57.889234    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:57.889352    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:57.889492    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:57.889504    9512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:40:59.419022    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:40:59.419037    9512 main.go:141] libmachine: Checking connection to Docker...
	I0610 19:40:59.419044    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetURL
	I0610 19:40:59.419214    9512 main.go:141] libmachine: Docker is up and running!
	I0610 19:40:59.419222    9512 main.go:141] libmachine: Reticulating splines...
	I0610 19:40:59.419226    9512 client.go:171] duration metric: took 16.019819643s to LocalClient.Create
	I0610 19:40:59.419237    9512 start.go:167] duration metric: took 16.019856631s to libmachine.API.Create "multinode-353000"
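
The unit install just above relies on a diff-or-replace idiom: `diff -u old new` exits non-zero when the two files differ, or, as in this run, when the old unit does not exist yet, and only in that case is the new unit moved into place and Docker re-enabled and restarted. A sketch of the same idiom, using /tmp paths instead of /lib/systemd/system so it stays harmless:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// diff's non-zero exit (files differ, or old file missing) triggers the
	// replace-and-restart branch; identical files leave the service untouched.
	script := `diff -u /tmp/docker.service /tmp/docker.service.new || {
  mv /tmp/docker.service.new /tmp/docker.service
  systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker
}`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	log.Printf("%s (err: %v)", out, err)
}
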
	I0610 19:40:59.419243    9512 start.go:293] postStartSetup for "multinode-353000-m02" (driver="hyperkit")
	I0610 19:40:59.419249    9512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:40:59.419265    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:59.419412    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:40:59.419428    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:59.419688    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:59.419864    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:59.420069    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:59.420187    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:40:59.462223    9512 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:40:59.466564    9512 command_runner.go:130] > NAME=Buildroot
	I0610 19:40:59.466574    9512 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:40:59.466578    9512 command_runner.go:130] > ID=buildroot
	I0610 19:40:59.466582    9512 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:40:59.466586    9512 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:40:59.466696    9512 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:40:59.466705    9512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:40:59.466810    9512 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:40:59.466997    9512 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:40:59.467003    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:40:59.467214    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:40:59.477549    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:40:59.510893    9512 start.go:296] duration metric: took 91.645399ms for postStartSetup
	I0610 19:40:59.510918    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:40:59.596538    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:40:59.596947    9512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:40:59.599876    9512 start.go:128] duration metric: took 16.233867004s to createHost
	I0610 19:40:59.599905    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:59.600078    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:59.600193    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:59.600286    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:59.600384    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:59.600497    9512 main.go:141] libmachine: Using SSH client type: native
	I0610 19:40:59.600623    9512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc572f00] 0xc575c60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:40:59.600630    9512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 19:40:59.659115    9512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073658.952292449
	
	I0610 19:40:59.659126    9512 fix.go:216] guest clock: 1718073658.952292449
	I0610 19:40:59.659131    9512 fix.go:229] Guest: 2024-06-10 19:40:58.952292449 -0700 PDT Remote: 2024-06-10 19:40:59.599893 -0700 PDT m=+79.132343070 (delta=-647.600551ms)
	I0610 19:40:59.659146    9512 fix.go:200] guest clock delta is within tolerance: -647.600551ms
	I0610 19:40:59.659150    9512 start.go:83] releasing machines lock for "multinode-353000-m02", held for 16.293259285s
	I0610 19:40:59.659169    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:59.659301    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:40:59.684216    9512 out.go:177] * Found network options:
	I0610 19:40:59.705882    9512 out.go:177]   - NO_PROXY=192.169.0.19
	W0610 19:40:59.726605    9512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:40:59.726641    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:59.727098    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:59.727236    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:40:59.727310    9512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:40:59.727335    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	W0610 19:40:59.727387    9512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:40:59.727449    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:59.727456    9512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:40:59.727468    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:40:59.727570    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:59.727588    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:40:59.727702    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:59.727712    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:40:59.727822    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:40:59.727871    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:40:59.727930    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:40:59.765024    9512 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:40:59.765100    9512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:40:59.765157    9512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:40:59.817628    9512 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:40:59.817817    9512 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:40:59.817846    9512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:40:59.817857    9512 start.go:494] detecting cgroup driver to use...
	I0610 19:40:59.817963    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:40:59.833726    9512 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:40:59.833942    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:40:59.842161    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:40:59.850678    9512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:40:59.850793    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:40:59.859590    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:40:59.867981    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:40:59.876087    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:40:59.884265    9512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:40:59.892864    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:40:59.901203    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:40:59.909401    9512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:40:59.918041    9512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:40:59.925310    9512 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:40:59.925426    9512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:40:59.932988    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:00.026496    9512 ssh_runner.go:195] Run: sudo systemctl restart containerd
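
The run of sed edits above forces containerd onto the cgroupfs driver by rewriting every `SystemdCgroup = ...` line in /etc/containerd/config.toml to false. The same rewrite, sketched in Go against an in-memory config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
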
	I0610 19:41:00.046796    9512 start.go:494] detecting cgroup driver to use...
	I0610 19:41:00.046868    9512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:41:00.060897    9512 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:41:00.061061    9512 command_runner.go:130] > [Unit]
	I0610 19:41:00.061071    9512 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:41:00.061076    9512 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:41:00.061093    9512 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:41:00.061102    9512 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:41:00.061107    9512 command_runner.go:130] > StartLimitBurst=3
	I0610 19:41:00.061112    9512 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:41:00.061115    9512 command_runner.go:130] > [Service]
	I0610 19:41:00.061119    9512 command_runner.go:130] > Type=notify
	I0610 19:41:00.061122    9512 command_runner.go:130] > Restart=on-failure
	I0610 19:41:00.061126    9512 command_runner.go:130] > Environment=NO_PROXY=192.169.0.19
	I0610 19:41:00.061132    9512 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:41:00.061143    9512 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:41:00.061149    9512 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:41:00.061155    9512 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:41:00.061161    9512 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:41:00.061166    9512 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:41:00.061173    9512 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:41:00.061188    9512 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:41:00.061194    9512 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:41:00.061197    9512 command_runner.go:130] > ExecStart=
	I0610 19:41:00.061229    9512 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:41:00.061238    9512 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:41:00.061252    9512 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:41:00.061258    9512 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:41:00.061262    9512 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:41:00.061269    9512 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:41:00.061272    9512 command_runner.go:130] > LimitCORE=infinity
	I0610 19:41:00.061277    9512 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:41:00.061281    9512 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:41:00.061284    9512 command_runner.go:130] > TasksMax=infinity
	I0610 19:41:00.061288    9512 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:41:00.061293    9512 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:41:00.061297    9512 command_runner.go:130] > Delegate=yes
	I0610 19:41:00.061302    9512 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:41:00.061309    9512 command_runner.go:130] > KillMode=process
	I0610 19:41:00.061314    9512 command_runner.go:130] > [Install]
	I0610 19:41:00.061317    9512 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:41:00.061412    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:41:00.077007    9512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:41:00.095607    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:41:00.106846    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:41:00.117275    9512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:41:00.141576    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:41:00.152003    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:41:00.167204    9512 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
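
crictl is pointed at the chosen runtime through /etc/crictl.yaml, a one-key YAML file; earlier in the run it named the containerd socket, and here it is rewritten for cri-dockerd. A sketch that writes the same content (to ./crictl.yaml instead of /etc, to stay side-effect free):

package main

import (
	"log"
	"os"
)

func main() {
	// Matches the payload tee'd to /etc/crictl.yaml in the log above.
	conf := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
	if err := os.WriteFile("crictl.yaml", []byte(conf), 0644); err != nil {
		log.Fatal(err)
	}
}
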
	I0610 19:41:00.167478    9512 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:41:00.170237    9512 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:41:00.170377    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:41:00.177626    9512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:41:00.191158    9512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:41:00.293172    9512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:41:00.391268    9512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:41:00.391293    9512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
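
The 130-byte payload copied to /etc/docker/daemon.json is not echoed in the log. A daemon.json that selects the cgroupfs driver, as the docker.go:574 message describes, would typically look like the following; the exact keys below are an assumption, not taken from this run:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
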
	I0610 19:41:00.405851    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:00.498872    9512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:41:02.751744    9512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.252928697s)
	I0610 19:41:02.751809    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 19:41:02.763372    9512 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 19:41:02.776706    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:41:02.786893    9512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 19:41:02.897316    9512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 19:41:03.004228    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:03.115067    9512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 19:41:03.128879    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:41:03.140866    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:03.257179    9512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 19:41:03.317532    9512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 19:41:03.317609    9512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 19:41:03.321822    9512 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 19:41:03.321837    9512 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 19:41:03.321843    9512 command_runner.go:130] > Device: 0,22	Inode: 799         Links: 1
	I0610 19:41:03.321849    9512 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 19:41:03.321855    9512 command_runner.go:130] > Access: 2024-06-11 02:41:02.564340416 +0000
	I0610 19:41:03.321868    9512 command_runner.go:130] > Modify: 2024-06-11 02:41:02.564340416 +0000
	I0610 19:41:03.321875    9512 command_runner.go:130] > Change: 2024-06-11 02:41:02.566340416 +0000
	I0610 19:41:03.321880    9512 command_runner.go:130] >  Birth: -
	I0610 19:41:03.322091    9512 start.go:562] Will wait 60s for crictl version
	I0610 19:41:03.322141    9512 ssh_runner.go:195] Run: which crictl
	I0610 19:41:03.325789    9512 command_runner.go:130] > /usr/bin/crictl
	I0610 19:41:03.325902    9512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 19:41:03.350203    9512 command_runner.go:130] > Version:  0.1.0
	I0610 19:41:03.350321    9512 command_runner.go:130] > RuntimeName:  docker
	I0610 19:41:03.350355    9512 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 19:41:03.350419    9512 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 19:41:03.351474    9512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 19:41:03.351542    9512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:41:03.366604    9512 command_runner.go:130] > 26.1.4
	I0610 19:41:03.367417    9512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:41:03.384122    9512 command_runner.go:130] > 26.1.4
	I0610 19:41:03.412222    9512 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 19:41:03.455859    9512 out.go:177]   - env NO_PROXY=192.169.0.19
	I0610 19:41:03.481911    9512 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:41:03.482285    9512 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0610 19:41:03.486817    9512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
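
The /etc/hosts command above is a filter-and-append: strip any stale line ending in a tab plus `host.minikube.internal`, append the fresh mapping, and copy the temp file back over /etc/hosts. The same transformation, sketched in Go on an in-memory hosts file:

package main

import (
	"fmt"
	"strings"
)

// pinHost drops any existing line for name and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n" // stale entry (hypothetical)
	fmt.Print(pinHost(before, "192.169.0.1", "host.minikube.internal"))
}
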
	I0610 19:41:03.497025    9512 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:41:03.497171    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:41:03.497399    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:41:03.497421    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:41:03.506238    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53104
	I0610 19:41:03.506566    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:41:03.506891    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:41:03.506903    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:41:03.507135    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:41:03.507255    9512 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:41:03.507341    9512 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:41:03.507426    9512 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:41:03.508452    9512 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:41:03.508718    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:41:03.508744    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:41:03.517292    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53106
	I0610 19:41:03.517615    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:41:03.517964    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:41:03.517982    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:41:03.518218    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:41:03.518343    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:41:03.518436    9512 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000 for IP: 192.169.0.20
	I0610 19:41:03.518442    9512 certs.go:194] generating shared ca certs ...
	I0610 19:41:03.518455    9512 certs.go:226] acquiring lock for ca certs: {Name:mkb8782270d93d160af8329e99f7f211e7b6b737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:41:03.518642    9512 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key
	I0610 19:41:03.518718    9512 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key
	I0610 19:41:03.518728    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 19:41:03.518751    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 19:41:03.518770    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 19:41:03.518787    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 19:41:03.518883    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem (1338 bytes)
	W0610 19:41:03.518933    9512 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485_empty.pem, impossibly tiny 0 bytes
	I0610 19:41:03.518943    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 19:41:03.518992    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem (1082 bytes)
	I0610 19:41:03.519032    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem (1123 bytes)
	I0610 19:41:03.519077    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem (1679 bytes)
	I0610 19:41:03.519173    9512 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:41:03.519211    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem -> /usr/share/ca-certificates/6485.pem
	I0610 19:41:03.519232    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /usr/share/ca-certificates/64852.pem
	I0610 19:41:03.519249    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:41:03.519277    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 19:41:03.538849    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 19:41:03.558262    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 19:41:03.577482    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 19:41:03.597014    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem --> /usr/share/ca-certificates/6485.pem (1338 bytes)
	I0610 19:41:03.616094    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /usr/share/ca-certificates/64852.pem (1708 bytes)
	I0610 19:41:03.635069    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 19:41:03.654409    9512 ssh_runner.go:195] Run: openssl version
	I0610 19:41:03.658519    9512 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 19:41:03.658735    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6485.pem && ln -fs /usr/share/ca-certificates/6485.pem /etc/ssl/certs/6485.pem"
	I0610 19:41:03.667961    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6485.pem
	I0610 19:41:03.671228    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:41:03.671350    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:41:03.671389    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6485.pem
	I0610 19:41:03.675399    9512 command_runner.go:130] > 51391683
	I0610 19:41:03.675603    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6485.pem /etc/ssl/certs/51391683.0"
	I0610 19:41:03.684642    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64852.pem && ln -fs /usr/share/ca-certificates/64852.pem /etc/ssl/certs/64852.pem"
	I0610 19:41:03.693719    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64852.pem
	I0610 19:41:03.696987    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:41:03.697087    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:41:03.697122    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64852.pem
	I0610 19:41:03.701863    9512 command_runner.go:130] > 3ec20f2e
	I0610 19:41:03.702022    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 19:41:03.711212    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 19:41:03.720269    9512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:41:03.723560    9512 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:41:03.723705    9512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:41:03.723793    9512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:41:03.728057    9512 command_runner.go:130] > b5213941
	I0610 19:41:03.728231    9512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
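
The `openssl x509 -hash` / `ln -fs` pairs above maintain OpenSSL's hashed trust directory: each CA PEM installed under /usr/share/ca-certificates gets a `<subject-hash>.0` symlink in /etc/ssl/certs so that library lookups by subject hash can find it. A sketch of one iteration (ca.pem and the link location are placeholders):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout -in ca.pem` prints the subject hash,
	// e.g. "b5213941" for the minikube CA in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", "ca.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	// Link <hash>.0 -> ca.pem, tolerating an already-existing link.
	if err := os.Symlink("ca.pem", hash+".0"); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
}
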
	I0610 19:41:03.737269    9512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:41:03.740423    9512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 19:41:03.740446    9512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 19:41:03.740475    9512 kubeadm.go:928] updating node {m02 192.169.0.20 8443 v1.30.1 docker false true} ...
	I0610 19:41:03.740536    9512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 19:41:03.740576    9512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 19:41:03.748515    9512 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	I0610 19:41:03.748596    9512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 19:41:03.748634    9512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 19:41:03.756689    9512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 19:41:03.756690    9512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 19:41:03.756694    9512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 19:41:03.756705    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 19:41:03.756705    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 19:41:03.756746    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:41:03.756795    9512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 19:41:03.756799    9512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 19:41:03.763536    9512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 19:41:03.763560    9512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 19:41:03.763577    9512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 19:41:03.763593    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 19:41:03.763602    9512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 19:41:03.763617    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 19:41:03.782641    9512 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 19:41:03.782818    9512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 19:41:03.828808    9512 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 19:41:03.828843    9512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 19:41:03.828872    9512 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
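
The `?checksum=file:...sha256` URLs above describe a fetch-and-verify download: the binary and its published SHA-256 digest are both fetched, and the digests must match before the file is installed. A self-contained sketch of that pattern (error handling trimmed to the essentials):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256") // published digest, hex text
	if err != nil {
		log.Fatal(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x want %s", got, want)
	}
	fmt.Println("checksum verified; writing kubectl")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
}
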
	I0610 19:41:04.425171    9512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 19:41:04.432413    9512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0610 19:41:04.446103    9512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 19:41:04.459612    9512 ssh_runner.go:195] Run: grep 192.169.0.19	control-plane.minikube.internal$ /etc/hosts
	I0610 19:41:04.462708    9512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:41:04.472161    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:04.568678    9512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:41:04.584608    9512 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:41:04.584920    9512 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:41:04.584949    9512 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:41:04.594266    9512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53108
	I0610 19:41:04.594615    9512 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:41:04.594937    9512 main.go:141] libmachine: Using API Version  1
	I0610 19:41:04.594949    9512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:41:04.595189    9512 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:41:04.595323    9512 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:41:04.595412    9512 start.go:316] joinCluster: &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:41:04.595496    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 19:41:04.595508    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:41:04.595590    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:41:04.595674    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:41:04.595773    9512 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:41:04.595850    9512 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:41:04.677862    9512 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hfijiu.wf8y60btrxeqj5e5 --discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db 
	I0610 19:41:04.677915    9512 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 19:41:04.677939    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hfijiu.wf8y60btrxeqj5e5 --discovery-token-ca-cert-hash sha256:0232f6cacb3f166e73c433a72eddce5ba032fbcbff82650ad59364c6df0629db --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-353000-m02"
	I0610 19:41:04.790976    9512 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 19:41:05.449779    9512 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 19:41:05.449794    9512 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0610 19:41:05.449802    9512 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0610 19:41:05.449808    9512 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:41:05.449814    9512 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:41:05.449825    9512 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 19:41:05.449846    9512 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 19:41:05.449861    9512 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.776979ms
	I0610 19:41:05.449872    9512 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0610 19:41:05.449876    9512 command_runner.go:130] > This node has joined the cluster:
	I0610 19:41:05.449882    9512 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0610 19:41:05.449886    9512 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0610 19:41:05.449894    9512 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0610 19:41:05.449921    9512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 19:41:05.690379    9512 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0610 19:41:05.690533    9512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-353000-m02 minikube.k8s.io/updated_at=2024_06_10T19_41_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=multinode-353000 minikube.k8s.io/primary=false
	I0610 19:41:05.752399    9512 command_runner.go:130] > node/multinode-353000-m02 labeled
	I0610 19:41:05.753639    9512 start.go:318] duration metric: took 1.158245828s to joinCluster
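
The join itself is two commands: `kubeadm token create --print-join-command --ttl=0` on the control plane mints a fresh join line, and the worker runs that line with the extra flags seen above (`--ignore-preflight-errors=all`, the cri-dockerd socket, and an explicit node name). Sketched below with local exec standing in for the SSH runner used in the log:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (control plane): mint a join command with a non-expiring token.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Step 2 (worker): run the printed command plus minikube's extra flags.
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/cri-dockerd.sock" +
		" --node-name=multinode-353000-m02"
	if b, err := exec.Command("/bin/bash", "-c", "sudo "+join).CombinedOutput(); err != nil {
		log.Fatalf("join failed: %v\n%s", err, b)
	}
}
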
	I0610 19:41:05.753724    9512 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0610 19:41:05.776736    9512 out.go:177] * Verifying Kubernetes components...
	I0610 19:41:05.753964    9512 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:41:05.837278    9512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:41:05.946823    9512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:41:05.958513    9512 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:41:05.958732    9512 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xda10600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:41:05.958931    9512 node_ready.go:35] waiting up to 6m0s for node "multinode-353000-m02" to be "Ready" ...
	I0610 19:41:05.958973    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:05.958978    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:05.958983    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:05.958987    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:05.961074    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:05.961088    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:05.961093    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:05.961096    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:05.961099    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:05.961102    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:05.961105    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:06 GMT
	I0610 19:41:05.961107    9512 round_trippers.go:580]     Audit-Id: c2c3f582-8401-4360-aedd-bd57bfe053df
	I0610 19:41:05.961110    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:05.961161    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:06.460722    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:06.460739    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:06.460748    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:06.460753    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:06.463108    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:06.463116    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:06.463121    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:06.463125    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:06.463128    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:06.463131    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:06.463134    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:06.463140    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:06 GMT
	I0610 19:41:06.463142    9512 round_trippers.go:580]     Audit-Id: eeaa5f3f-0e4a-4313-853f-fb59eeb084c4
	I0610 19:41:06.463238    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:06.959999    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:06.960020    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:06.960032    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:06.960042    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:06.962170    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:06.962187    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:06.962196    9512 round_trippers.go:580]     Audit-Id: e6a6cd32-3d1c-4fa3-89da-f381790bf979
	I0610 19:41:06.962202    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:06.962206    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:06.962209    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:06.962212    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:06.962215    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:06.962220    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:07 GMT
	I0610 19:41:06.962260    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:07.460122    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:07.460143    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:07.460155    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:07.460161    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:07.462604    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:07.462622    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:07.462629    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:07.462635    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:07.462639    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:07.462642    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:07.462645    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:07.462649    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:07 GMT
	I0610 19:41:07.462652    9512 round_trippers.go:580]     Audit-Id: b8f07f56-091f-435a-8161-4ed42d6c331a
	I0610 19:41:07.462754    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:07.961155    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:07.961178    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:07.961206    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:07.961215    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:07.964105    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:07.964120    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:07.964127    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:07.964132    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:08 GMT
	I0610 19:41:07.964145    9512 round_trippers.go:580]     Audit-Id: b31f0eb0-1422-4621-a02f-e51822a5f34b
	I0610 19:41:07.964154    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:07.964159    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:07.964163    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:07.964166    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:07.964236    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:07.964426    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:08.459033    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:08.459048    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:08.459054    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:08.459058    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:08.460589    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:08.460602    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:08.460607    9512 round_trippers.go:580]     Audit-Id: cd3202f1-5287-4ba7-a823-2364aa66ff9d
	I0610 19:41:08.460610    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:08.460613    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:08.460615    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:08.460617    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:08.460620    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:08.460622    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:08 GMT
	I0610 19:41:08.460671    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:08.960089    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:08.960111    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:08.960124    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:08.960130    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:08.962683    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:08.962704    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:08.962712    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:09 GMT
	I0610 19:41:08.962717    9512 round_trippers.go:580]     Audit-Id: b7d5e41f-f70d-4453-b569-0d8a169811ac
	I0610 19:41:08.962721    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:08.962726    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:08.962731    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:08.962747    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:08.962753    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:08.962815    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:09.459631    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:09.459655    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:09.459668    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:09.459673    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:09.462521    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:09.462537    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:09.462545    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:09 GMT
	I0610 19:41:09.462549    9512 round_trippers.go:580]     Audit-Id: e27d0ede-87c1-4859-aea2-ed5bc5fc6c09
	I0610 19:41:09.462553    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:09.462558    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:09.462562    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:09.462566    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:09.462569    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:09.462649    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:09.959920    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:09.959943    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:09.959956    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:09.959964    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:09.963144    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:09.963161    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:09.963168    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:10 GMT
	I0610 19:41:09.963171    9512 round_trippers.go:580]     Audit-Id: f18fa249-3721-4d9a-bfb6-620c5d7e89cb
	I0610 19:41:09.963175    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:09.963178    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:09.963181    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:09.963183    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:09.963187    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:09.963269    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:10.459327    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:10.459352    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:10.459363    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:10.459370    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:10.461961    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:10.461978    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:10.461986    9512 round_trippers.go:580]     Audit-Id: 37ecd113-d28d-4579-bd12-1fb8bf35e44f
	I0610 19:41:10.461993    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:10.461999    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:10.462005    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:10.462012    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:10.462024    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:10.462031    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:10 GMT
	I0610 19:41:10.462120    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:10.462329    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:10.959895    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:10.959923    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:10.959952    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:10.959961    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:10.962716    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:10.962732    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:10.962739    9512 round_trippers.go:580]     Audit-Id: a81c5e33-075b-4595-8dfd-f46a5d3866de
	I0610 19:41:10.962747    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:10.962751    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:10.962755    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:10.962758    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:10.962761    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:10.962764    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:11 GMT
	I0610 19:41:10.962830    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:11.459842    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:11.459865    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:11.459911    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:11.459918    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:11.462515    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:11.462531    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:11.462539    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:11.462543    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:11.462547    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:11.462551    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:11.462555    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:11.462559    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:11 GMT
	I0610 19:41:11.462563    9512 round_trippers.go:580]     Audit-Id: 041411e4-98a2-4dd5-9a76-f81571c45b43
	I0610 19:41:11.462638    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:11.959443    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:11.959467    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:11.959478    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:11.959504    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:11.963101    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:11.963117    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:11.963124    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:11.963129    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:11.963133    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:12 GMT
	I0610 19:41:11.963137    9512 round_trippers.go:580]     Audit-Id: 3ec48922-77e4-49a1-b40a-8b865fa71c7a
	I0610 19:41:11.963140    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:11.963144    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:11.963158    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:11.963254    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:12.459893    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:12.459907    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:12.459911    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:12.459914    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:12.462176    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:12.462189    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:12.462197    9512 round_trippers.go:580]     Audit-Id: 0edb7e3c-01d9-47de-b2ca-da2f893c3923
	I0610 19:41:12.462202    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:12.462206    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:12.462210    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:12.462213    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:12.462215    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:12.462217    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:12 GMT
	I0610 19:41:12.462269    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:12.462426    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:12.960134    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:12.960150    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:12.960156    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:12.960159    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:12.961871    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:12.961884    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:12.961889    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:13 GMT
	I0610 19:41:12.961893    9512 round_trippers.go:580]     Audit-Id: 050d7d80-08f1-4e34-b07a-c6b040dda417
	I0610 19:41:12.961897    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:12.961900    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:12.961904    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:12.961906    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:12.961908    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:12.961954    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:13.459008    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:13.459027    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:13.459051    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:13.459054    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:13.461150    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:13.461164    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:13.461173    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:13.461177    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:13.461185    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:13.461193    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:13.461196    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:13.461199    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:13 GMT
	I0610 19:41:13.461200    9512 round_trippers.go:580]     Audit-Id: 0b06f55a-3861-45c0-a038-d4a19c7db6b3
	I0610 19:41:13.461256    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:13.958876    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:13.958901    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:13.958914    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:13.958922    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:13.961710    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:13.961731    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:13.961738    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:13.961742    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:13.961747    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:13.961752    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:14 GMT
	I0610 19:41:13.961756    9512 round_trippers.go:580]     Audit-Id: 7f81fdb9-41ab-44a1-8bba-b4a0eac488b4
	I0610 19:41:13.961760    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:13.961764    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:13.961810    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:14.460227    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:14.460248    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:14.460262    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:14.460266    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:14.462137    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:14.462146    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:14.462151    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:14.462154    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:14.462157    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:14.462159    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:14 GMT
	I0610 19:41:14.462161    9512 round_trippers.go:580]     Audit-Id: 6c60372a-7235-4306-aa3f-10210d421f9a
	I0610 19:41:14.462164    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:14.462166    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:14.462214    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:14.959242    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:14.959260    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:14.959269    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:14.959274    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:14.961121    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:14.961134    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:14.961139    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:14.961143    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:15 GMT
	I0610 19:41:14.961145    9512 round_trippers.go:580]     Audit-Id: 8a8de5e8-3d4f-41b1-8504-31004f58c98e
	I0610 19:41:14.961148    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:14.961151    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:14.961153    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:14.961156    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:14.961201    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:14.961347    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:15.459289    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:15.459306    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:15.459313    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:15.459327    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:15.461358    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:15.461381    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:15.461390    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:15.461395    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:15.461400    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:15.461404    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:15.461408    9512 round_trippers.go:580]     Content-Length: 4087
	I0610 19:41:15.461425    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:15 GMT
	I0610 19:41:15.461430    9512 round_trippers.go:580]     Audit-Id: 0d04125e-3f0d-44b2-8f2c-7ecc80f96d96
	I0610 19:41:15.461512    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"474","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3063 chars]
	I0610 19:41:15.958725    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:15.958741    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:15.958748    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:15.958752    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:15.960565    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:15.960579    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:15.960586    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:15.960591    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:15.960602    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:15.960608    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:16 GMT
	I0610 19:41:15.960611    9512 round_trippers.go:580]     Audit-Id: 234b7cb0-0e36-40d4-8c89-23e2dc290cd5
	I0610 19:41:15.960615    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:15.960737    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:16.459797    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:16.459811    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:16.459818    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:16.459822    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:16.461545    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:16.461577    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:16.461583    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:16.461586    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:16.461588    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:16.461591    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:16 GMT
	I0610 19:41:16.461594    9512 round_trippers.go:580]     Audit-Id: 38c1dbef-85c5-4215-a921-3953e4231283
	I0610 19:41:16.461597    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:16.461817    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:16.959346    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:16.959362    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:16.959369    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:16.959372    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:16.960850    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:16.960861    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:16.960867    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:16.960869    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:16.960890    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:16.960893    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:16.960895    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:17 GMT
	I0610 19:41:16.960897    9512 round_trippers.go:580]     Audit-Id: f54837ab-75c2-4b9c-824c-c6ba66d2d78a
	I0610 19:41:16.961036    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:17.458995    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:17.459021    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:17.459100    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:17.459109    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:17.461680    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:17.461722    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:17.461738    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:17.461784    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:17 GMT
	I0610 19:41:17.461792    9512 round_trippers.go:580]     Audit-Id: 7fb01983-54da-48de-a857-bc8886649260
	I0610 19:41:17.461796    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:17.461800    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:17.461804    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:17.461941    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:17.462119    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:17.958949    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:17.959007    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:17.959021    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:17.959030    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:17.961436    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:17.961450    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:17.961460    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:17.961466    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:17.961472    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:18 GMT
	I0610 19:41:17.961477    9512 round_trippers.go:580]     Audit-Id: 9907dbea-4425-4c9c-9e84-33420c0f823c
	I0610 19:41:17.961484    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:17.961491    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:17.961684    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:18.458781    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:18.458806    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:18.458819    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:18.458824    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:18.461474    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:18.461491    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:18.461502    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:18.461528    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:18 GMT
	I0610 19:41:18.461536    9512 round_trippers.go:580]     Audit-Id: 95010a44-60ff-44f3-83b8-4661b3abc62a
	I0610 19:41:18.461540    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:18.461544    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:18.461552    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:18.461694    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:18.959336    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:18.959364    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:18.959376    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:18.959383    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:18.961899    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:18.961914    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:18.961929    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:18.961954    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:18.961967    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:18.961971    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:19 GMT
	I0610 19:41:18.961974    9512 round_trippers.go:580]     Audit-Id: 2b7b2239-4e6d-4765-b53d-3776c6468a2a
	I0610 19:41:18.961985    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:18.962094    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:19.459570    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:19.459605    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:19.459617    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:19.459622    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:19.462431    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:19.462447    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:19.462453    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:19.462458    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:19.462462    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:19 GMT
	I0610 19:41:19.462465    9512 round_trippers.go:580]     Audit-Id: 2b8510f7-8a32-4102-a293-f8d615988abf
	I0610 19:41:19.462468    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:19.462492    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:19.462712    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:19.462938    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:19.958755    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:19.958780    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:19.958792    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:19.958798    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:19.961305    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:19.961321    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:19.961329    9512 round_trippers.go:580]     Audit-Id: 52ab2c51-7a28-4f69-8297-6173f37520c6
	I0610 19:41:19.961334    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:19.961338    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:19.961342    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:19.961345    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:19.961349    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:20 GMT
	I0610 19:41:19.961689    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:20.459307    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:20.459334    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:20.459351    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:20.459416    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:20.461979    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:20.461998    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:20.462005    9512 round_trippers.go:580]     Audit-Id: b690b229-8252-48bf-8031-72147c6e235a
	I0610 19:41:20.462022    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:20.462033    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:20.462037    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:20.462040    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:20.462044    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:20 GMT
	I0610 19:41:20.462127    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:20.959520    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:20.959537    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:20.959543    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:20.959547    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:20.961031    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:20.961039    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:20.961044    9512 round_trippers.go:580]     Audit-Id: 252d4dd5-b991-4ca6-a588-cf92fbacfbdd
	I0610 19:41:20.961047    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:20.961050    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:20.961054    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:20.961058    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:20.961062    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:21 GMT
	I0610 19:41:20.961239    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:21.459098    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:21.459126    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:21.459137    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:21.459142    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:21.461601    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:21.461617    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:21.461624    9512 round_trippers.go:580]     Audit-Id: 444b090c-6e4a-48c3-ac85-1491ea9ad8a9
	I0610 19:41:21.461627    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:21.461634    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:21.461637    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:21.461650    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:21.461654    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:21 GMT
	I0610 19:41:21.461763    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:21.958987    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:21.959002    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:21.959036    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:21.959041    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:21.960555    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:21.960567    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:21.960573    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:22 GMT
	I0610 19:41:21.960575    9512 round_trippers.go:580]     Audit-Id: 553ad0de-149f-4599-8520-ea44a7b603a9
	I0610 19:41:21.960577    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:21.960581    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:21.960583    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:21.960586    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:21.960710    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:21.960887    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:22.460026    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:22.460048    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:22.460061    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:22.460068    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:22.462430    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:22.462448    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:22.462456    9512 round_trippers.go:580]     Audit-Id: 980cc1f6-bf18-4d93-b842-6cf0bb7076b8
	I0610 19:41:22.462461    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:22.462465    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:22.462470    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:22.462473    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:22.462477    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:22 GMT
	I0610 19:41:22.462592    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:22.958611    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:22.958633    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:22.958643    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:22.958648    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:22.964702    9512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 19:41:22.964712    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:22.964718    9512 round_trippers.go:580]     Audit-Id: 6a5ae52f-12fc-4c61-a629-ca21b92edb42
	I0610 19:41:22.964722    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:22.964726    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:22.964728    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:22.964730    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:22.964733    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:23 GMT
	I0610 19:41:22.964959    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:23.459408    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:23.459430    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:23.459444    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:23.459451    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:23.462129    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:23.462148    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:23.462157    9512 round_trippers.go:580]     Audit-Id: 749c8bd3-3859-41d7-b74d-8b5e751d4907
	I0610 19:41:23.462191    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:23.462201    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:23.462206    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:23.462211    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:23.462216    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:23 GMT
	I0610 19:41:23.462423    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:23.959012    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:23.959036    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:23.959048    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:23.959053    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:23.961367    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:23.961380    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:23.961387    9512 round_trippers.go:580]     Audit-Id: 30c0eb2c-ae12-4604-9c76-ef8dc0cef3d9
	I0610 19:41:23.961393    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:23.961396    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:23.961400    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:23.961404    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:23.961407    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:24 GMT
	I0610 19:41:23.961643    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:23.961852    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:24.460520    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:24.460545    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:24.460556    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:24.460565    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:24.463255    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:24.463267    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:24.463274    9512 round_trippers.go:580]     Audit-Id: 24885254-9895-4aa9-bc89-855d54b4025a
	I0610 19:41:24.463278    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:24.463282    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:24.463286    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:24.463290    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:24.463295    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:24 GMT
	I0610 19:41:24.463462    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:24.959070    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:24.959086    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:24.959094    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:24.959100    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:24.960644    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:24.960654    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:24.960660    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:24.960663    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:24.960667    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:24.960669    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:25 GMT
	I0610 19:41:24.960671    9512 round_trippers.go:580]     Audit-Id: afd0c90d-d9f5-435b-8735-5741c339b236
	I0610 19:41:24.960673    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:24.960764    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:25.459440    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:25.459465    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:25.459477    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:25.459483    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:25.462450    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:25.462471    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:25.462480    9512 round_trippers.go:580]     Audit-Id: 01753a54-4b24-4d0d-8d53-9f069b864f52
	I0610 19:41:25.462488    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:25.462492    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:25.462496    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:25.462501    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:25.462507    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:25 GMT
	I0610 19:41:25.462586    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:25.959155    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:25.959187    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:25.959199    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:25.959209    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:25.961687    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:25.961700    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:25.961707    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:25.961712    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:25.961715    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:25.961719    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:26 GMT
	I0610 19:41:25.961723    9512 round_trippers.go:580]     Audit-Id: 8349843b-ce7d-44c4-8585-ab6924408542
	I0610 19:41:25.961727    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:25.962063    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:25.962285    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:26.459239    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:26.459260    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:26.459272    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:26.459279    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:26.461926    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:26.461939    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:26.461946    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:26.461950    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:26.461955    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:26 GMT
	I0610 19:41:26.461961    9512 round_trippers.go:580]     Audit-Id: 3647e9fc-63b6-40a5-a428-95a6a8221dd6
	I0610 19:41:26.461966    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:26.461979    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:26.462295    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:26.959532    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:26.959545    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:26.959551    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:26.959560    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:26.961085    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:26.961096    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:26.961101    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:26.961104    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:26.961107    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:26.961109    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:27 GMT
	I0610 19:41:26.961112    9512 round_trippers.go:580]     Audit-Id: e1604710-75ac-492d-ad2d-93f3216b4232
	I0610 19:41:26.961118    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:26.961222    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:27.458919    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:27.458942    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:27.458953    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:27.458961    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:27.461247    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:27.461261    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:27.461268    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:27.461272    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:27.461275    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:27 GMT
	I0610 19:41:27.461280    9512 round_trippers.go:580]     Audit-Id: e341755f-7fbd-4a15-b624-147b114a73a5
	I0610 19:41:27.461283    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:27.461288    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:27.461419    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:27.958422    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:27.958445    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:27.958457    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:27.958463    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:27.961234    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:27.961248    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:27.961255    9512 round_trippers.go:580]     Audit-Id: 9eba8b49-22e6-4b5b-9512-2fd321a25931
	I0610 19:41:27.961261    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:27.961268    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:27.961271    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:27.961276    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:27.961280    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:28 GMT
	I0610 19:41:27.961427    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:28.459700    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:28.459727    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:28.459740    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:28.459746    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:28.462412    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:28.462429    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:28.462437    9512 round_trippers.go:580]     Audit-Id: c270c071-fcb5-402b-a6d8-1fcf43445b40
	I0610 19:41:28.462442    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:28.462445    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:28.462450    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:28.462453    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:28.462458    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:28 GMT
	I0610 19:41:28.462730    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:28.462950    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:28.960004    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:28.960025    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:28.960036    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:28.960042    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:28.962513    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:28.962530    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:28.962538    9512 round_trippers.go:580]     Audit-Id: 89c5f2aa-19e5-4145-a8c7-8676742b72e7
	I0610 19:41:28.962542    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:28.962545    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:28.962557    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:28.962561    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:28.962564    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:29 GMT
	I0610 19:41:28.962651    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:29.458997    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:29.459021    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:29.459032    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:29.459039    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:29.461299    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:29.461312    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:29.461319    9512 round_trippers.go:580]     Audit-Id: eb883c15-bb49-414e-8eaa-706458c4c10c
	I0610 19:41:29.461323    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:29.461328    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:29.461332    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:29.461338    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:29.461344    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:29 GMT
	I0610 19:41:29.461647    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:29.959770    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:29.959825    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:29.959841    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:29.959848    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:29.962353    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:29.962369    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:29.962378    9512 round_trippers.go:580]     Audit-Id: 3354c1a5-d229-49d7-994d-25b0555359e3
	I0610 19:41:29.962383    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:29.962387    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:29.962404    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:29.962409    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:29.962413    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:30 GMT
	I0610 19:41:29.962496    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:30.459044    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:30.459144    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:30.459157    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:30.459166    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:30.461502    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:30.461516    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:30.461523    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:30.461528    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:30 GMT
	I0610 19:41:30.461531    9512 round_trippers.go:580]     Audit-Id: bf593768-c672-4131-8a0d-e7ecc06a5356
	I0610 19:41:30.461535    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:30.461539    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:30.461542    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:30.461706    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:30.959445    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:30.959459    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:30.959465    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:30.959470    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:30.960790    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:30.960800    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:30.960806    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:30.960809    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:31 GMT
	I0610 19:41:30.960811    9512 round_trippers.go:580]     Audit-Id: 817f7cd0-5a32-4089-947c-1a93a2dbbe40
	I0610 19:41:30.960815    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:30.960818    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:30.960821    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:30.960896    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:30.961046    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:31.459423    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:31.459449    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:31.459462    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:31.459468    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:31.461920    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:31.461938    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:31.461945    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:31.461958    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:31.461966    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:31 GMT
	I0610 19:41:31.461971    9512 round_trippers.go:580]     Audit-Id: 97d07c88-b56d-48f9-8034-f6208669db18
	I0610 19:41:31.461974    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:31.461978    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:31.462290    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:31.959315    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:31.959333    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:31.959344    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:31.959353    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:31.960963    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:31.960975    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:31.960981    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:31.960983    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:32 GMT
	I0610 19:41:31.960992    9512 round_trippers.go:580]     Audit-Id: eb90c881-49fd-411b-9529-b331d028a686
	I0610 19:41:31.960996    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:31.960999    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:31.961001    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:31.961067    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:32.459076    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:32.459100    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:32.459111    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:32.459120    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:32.461704    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:32.461720    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:32.461727    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:32 GMT
	I0610 19:41:32.461732    9512 round_trippers.go:580]     Audit-Id: 028d6a53-bb34-4f30-ac8f-7a9898e2b512
	I0610 19:41:32.461743    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:32.461746    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:32.461751    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:32.461756    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:32.461890    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:32.958738    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:32.958761    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:32.958773    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:32.958780    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:32.961535    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:32.961549    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:32.961556    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:32.961562    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:32.961567    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:33 GMT
	I0610 19:41:32.961572    9512 round_trippers.go:580]     Audit-Id: 5450e4d7-1841-4409-96b8-233b8ffb1c34
	I0610 19:41:32.961586    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:32.961591    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:32.961701    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:32.961902    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:33.458750    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:33.458771    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:33.458783    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:33.458789    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:33.461377    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:33.461393    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:33.461400    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:33.461404    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:33.461438    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:33.461445    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:33.461450    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:33 GMT
	I0610 19:41:33.461453    9512 round_trippers.go:580]     Audit-Id: 0f45bb43-e165-4505-b9a2-d28b78ca9638
	I0610 19:41:33.461552    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:33.958358    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:33.958381    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:33.958392    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:33.958416    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:33.961141    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:33.961156    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:33.961164    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:34 GMT
	I0610 19:41:33.961167    9512 round_trippers.go:580]     Audit-Id: 5f5ab44f-26f2-4738-a6bc-43faaaed4087
	I0610 19:41:33.961172    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:33.961175    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:33.961179    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:33.961184    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:33.961272    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:34.459575    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:34.459593    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:34.459601    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:34.459605    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:34.461451    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:34.461463    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:34.461471    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:34.461479    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:34.461486    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:34.461494    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:34.461500    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:34 GMT
	I0610 19:41:34.461519    9512 round_trippers.go:580]     Audit-Id: 093b4928-cccb-4e56-812e-46aae289f8e4
	I0610 19:41:34.461955    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:34.958274    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:34.958295    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:34.958306    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:34.958312    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:34.960329    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:34.960347    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:34.960356    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:34.960360    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:35 GMT
	I0610 19:41:34.960363    9512 round_trippers.go:580]     Audit-Id: 2f58d7e0-3d84-4484-8c51-3def3545824e
	I0610 19:41:34.960376    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:34.960381    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:34.960385    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:34.960529    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:35.458225    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:35.458243    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:35.458252    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:35.458256    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:35.460091    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:35.460103    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:35.460112    9512 round_trippers.go:580]     Audit-Id: 6d72e6b6-2746-4a8b-aa7d-14253129fe0c
	I0610 19:41:35.460119    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:35.460125    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:35.460130    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:35.460134    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:35.460138    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:35 GMT
	I0610 19:41:35.460298    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"493","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3396 chars]
	I0610 19:41:35.460453    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:35.958954    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:35.958969    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:35.958992    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:35.958997    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:35.960714    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:35.960729    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:35.960740    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:36 GMT
	I0610 19:41:35.960748    9512 round_trippers.go:580]     Audit-Id: 922dcce7-ff84-48af-a703-f9c291df9dc1
	I0610 19:41:35.960755    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:35.960777    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:35.960785    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:35.960789    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:35.960916    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:36.458130    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:36.458149    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:36.458161    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:36.458169    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:36.460655    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:36.460672    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:36.460679    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:36.460685    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:36.460688    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:36.460692    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:36 GMT
	I0610 19:41:36.460695    9512 round_trippers.go:580]     Audit-Id: b9ecc5e4-16f2-444b-a3c6-17d7868f119f
	I0610 19:41:36.460700    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:36.460861    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:36.958473    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:36.958495    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:36.958506    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:36.958515    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:36.961473    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:36.961490    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:36.961498    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:36.961503    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:36.961508    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:36.961512    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:37 GMT
	I0610 19:41:36.961516    9512 round_trippers.go:580]     Audit-Id: 7db77749-d0e2-47a5-9c44-1d5d04388013
	I0610 19:41:36.961521    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:36.961786    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:37.459412    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:37.459436    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:37.459448    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:37.459453    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:37.461471    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:37.461484    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:37.461491    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:37.461496    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:37.461500    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:37.461504    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:37 GMT
	I0610 19:41:37.461508    9512 round_trippers.go:580]     Audit-Id: bc4c4a99-890a-4713-a762-9c73ebe22a53
	I0610 19:41:37.461512    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:37.461766    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:37.461994    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:37.959359    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:37.959556    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:37.959606    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:37.959649    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:37.962393    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:37.962409    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:37.962415    9512 round_trippers.go:580]     Audit-Id: 6e303e30-db7c-4f4e-9902-818b52c268fa
	I0610 19:41:37.962418    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:37.962421    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:37.962424    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:37.962427    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:37.962431    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:38 GMT
	I0610 19:41:37.962517    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:38.460028    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:38.460056    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:38.460073    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:38.460133    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:38.462640    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:38.462653    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:38.462661    9512 round_trippers.go:580]     Audit-Id: 3d41777d-37c3-4261-a9d5-46c71d39b5e2
	I0610 19:41:38.462665    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:38.462669    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:38.462674    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:38.462677    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:38.462680    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:38 GMT
	I0610 19:41:38.462950    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:38.958355    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:38.958376    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:38.958387    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:38.958393    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:38.961158    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:38.961172    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:38.961179    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:39 GMT
	I0610 19:41:38.961183    9512 round_trippers.go:580]     Audit-Id: d33959c6-08ad-4323-8511-88f585f2ac28
	I0610 19:41:38.961187    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:38.961191    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:38.961194    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:38.961197    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:38.961290    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:39.458081    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:39.458104    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:39.458116    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:39.458123    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:39.460193    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:39.460204    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:39.460217    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:39.460227    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:39.460234    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:39.460241    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:39.460246    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:39 GMT
	I0610 19:41:39.460249    9512 round_trippers.go:580]     Audit-Id: 11316c6f-cd75-47de-9868-f8ad34caee9c
	I0610 19:41:39.460387    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:39.959299    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:39.959325    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:39.959337    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:39.959343    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:39.961982    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:39.961997    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:39.962004    9512 round_trippers.go:580]     Audit-Id: 4f6231ff-c77e-45a4-8995-b79cd7ccbafa
	I0610 19:41:39.962008    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:39.962012    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:39.962015    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:39.962019    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:39.962022    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:40 GMT
	I0610 19:41:39.962293    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:39.962524    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:40.458959    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:40.458974    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:40.458981    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:40.458992    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:40.460351    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:40.460361    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:40.460367    9512 round_trippers.go:580]     Audit-Id: 98cb7553-d139-4d4b-a105-dc9040ab575a
	I0610 19:41:40.460372    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:40.460377    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:40.460381    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:40.460393    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:40.460397    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:40 GMT
	I0610 19:41:40.460534    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:40.958040    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:40.958067    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:40.958080    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:40.958085    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:40.961162    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:40.961175    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:40.961182    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:40.961187    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:40.961190    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:40.961194    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:41 GMT
	I0610 19:41:40.961197    9512 round_trippers.go:580]     Audit-Id: 4c8a93c6-c8bd-4fa5-904c-7f6566e67072
	I0610 19:41:40.961200    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:40.961569    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:41.458737    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:41.458753    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:41.458760    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:41.458763    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:41.460334    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:41.460344    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:41.460349    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:41.460352    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:41.460356    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:41.460359    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:41.460362    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:41 GMT
	I0610 19:41:41.460366    9512 round_trippers.go:580]     Audit-Id: 1bf7374e-fb6f-4beb-8af8-cdc48e941b16
	I0610 19:41:41.460528    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:41.958219    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:41.958240    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:41.958251    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:41.958258    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:41.960870    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:41.960882    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:41.960889    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:41.960893    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:41.960901    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:42 GMT
	I0610 19:41:41.960912    9512 round_trippers.go:580]     Audit-Id: 866580a1-87ed-49e8-ad75-a637798ffe9f
	I0610 19:41:41.960927    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:41.960938    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:41.961083    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:42.458335    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:42.458349    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:42.458355    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:42.458358    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:42.459906    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:42.459918    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:42.459925    9512 round_trippers.go:580]     Audit-Id: 6c98664b-2ca3-4888-85d4-4ddbb795cd56
	I0610 19:41:42.459931    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:42.459935    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:42.459940    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:42.459944    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:42.459948    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:42 GMT
	I0610 19:41:42.460087    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:42.460273    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
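
The node_ready.go lines show what the repetition above amounts to: the test binary GETs /api/v1/nodes/multinode-353000-m02 roughly every 500ms and logs "Ready":"False" until the node's NodeReady condition flips to True. A minimal client-go sketch of the same poll follows; it is an illustration under assumptions, not minikube's own node_ready.go, and the kubeconfig path is a placeholder.

    // Sketch: poll a node until its NodeReady condition is True,
    // mirroring the ~500ms cadence visible in the log above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder path: point this at a real kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-353000-m02", metav1.GetOptions{})
            if err == nil && nodeIsReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
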
	I0610 19:41:42.957984    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:42.958005    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:42.958016    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:42.958021    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:42.960250    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:42.960262    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:42.960269    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:42.960274    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:42.960279    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:42.960286    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:42.960292    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:43 GMT
	I0610 19:41:42.960297    9512 round_trippers.go:580]     Audit-Id: 03bb20d9-34bf-412c-a8b5-9f02aee70e84
	I0610 19:41:42.960458    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:43.457859    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:43.457875    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:43.457883    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:43.457890    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:43.459729    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:43.459740    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:43.459745    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:43.459749    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:43 GMT
	I0610 19:41:43.459752    9512 round_trippers.go:580]     Audit-Id: b70dfd7d-10dd-4cfe-841a-1fcbc9effabe
	I0610 19:41:43.459756    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:43.459759    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:43.459765    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:43.459832    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:43.958294    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:43.958309    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:43.958318    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:43.958324    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:43.960233    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:43.960245    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:43.960250    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:43.960253    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:43.960256    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:43.960260    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:43.960264    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:44 GMT
	I0610 19:41:43.960266    9512 round_trippers.go:580]     Audit-Id: ea0ea31c-dc5f-4e26-8b96-a8cc9ed7ad4f
	I0610 19:41:43.960560    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:44.458456    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:44.458482    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:44.458496    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:44.458502    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:44.461267    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:44.461284    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:44.461292    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:44.461296    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:44.461311    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:44 GMT
	I0610 19:41:44.461338    9512 round_trippers.go:580]     Audit-Id: 92eb7232-1d70-4e17-b6d0-d1a5c8cc0180
	I0610 19:41:44.461345    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:44.461350    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:44.461436    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:44.461690    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:44.958720    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:44.958748    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:44.958760    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:44.958772    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:44.961544    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:44.961583    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:44.961592    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:45 GMT
	I0610 19:41:44.961596    9512 round_trippers.go:580]     Audit-Id: 7b53a372-adb6-4855-ab63-16fe379a71f6
	I0610 19:41:44.961600    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:44.961604    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:44.961607    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:44.961610    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:44.961762    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:45.458028    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:45.458054    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:45.458065    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:45.458071    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:45.460581    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:45.460594    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:45.460652    9512 round_trippers.go:580]     Audit-Id: 4f2205f4-e433-4076-a09c-d1bd1db77a26
	I0610 19:41:45.460674    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:45.460681    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:45.460685    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:45.460690    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:45.460695    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:45 GMT
	I0610 19:41:45.460801    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:45.957912    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:45.957935    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:45.957947    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:45.957956    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:45.961159    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:45.961177    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:45.961184    9512 round_trippers.go:580]     Audit-Id: 1f32630f-9461-42e7-86e7-5254cb3087fd
	I0610 19:41:45.961189    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:45.961193    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:45.961197    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:45.961200    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:45.961204    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:46 GMT
	I0610 19:41:45.961608    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:46.458969    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:46.458989    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:46.459001    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:46.459009    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:46.461380    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:46.461394    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:46.461401    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:46.461406    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:46 GMT
	I0610 19:41:46.461411    9512 round_trippers.go:580]     Audit-Id: 4181c3ab-0093-4c9e-becd-6c31a2b36897
	I0610 19:41:46.461414    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:46.461417    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:46.461419    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:46.461612    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:46.461831    9512 node_ready.go:53] node "multinode-353000-m02" has status "Ready":"False"
	I0610 19:41:46.957687    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:46.957711    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:46.957723    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:46.957728    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:46.960195    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:46.960208    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:46.960215    9512 round_trippers.go:580]     Audit-Id: 4efd146e-b72f-471f-a8a8-f6474ffa1440
	I0610 19:41:46.960221    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:46.960225    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:46.960230    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:46.960234    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:46.960238    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:47 GMT
	I0610 19:41:46.960302    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:47.457757    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:47.457782    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:47.457793    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:47.457800    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:47.460389    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:47.460405    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:47.460414    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:47.460419    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:47.460424    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:47.460427    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:47.460448    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:47 GMT
	I0610 19:41:47.460456    9512 round_trippers.go:580]     Audit-Id: c4baa7a5-a9f3-4931-bd45-3358c29ac1a4
	I0610 19:41:47.460751    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:47.958058    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:47.958079    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:47.958091    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:47.958098    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:47.960558    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:47.960571    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:47.960579    9512 round_trippers.go:580]     Audit-Id: 5b05f582-9f2c-4abc-8dcb-921ab1c358a5
	I0610 19:41:47.960585    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:47.960590    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:47.960595    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:47.960598    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:47.960601    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:47.960838    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"518","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3909 chars]
	I0610 19:41:48.457725    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:48.457761    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.457774    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.457780    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.459935    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:48.459950    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.459958    9512 round_trippers.go:580]     Audit-Id: fd1bce15-39f9-4855-b890-8ad9bc61b34e
	I0610 19:41:48.459964    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.459968    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.459972    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.459975    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.459979    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.460114    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"535","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3775 chars]
	I0610 19:41:48.460337    9512 node_ready.go:49] node "multinode-353000-m02" has status "Ready":"True"
	I0610 19:41:48.460349    9512 node_ready.go:38] duration metric: took 42.50288314s for node "multinode-353000-m02" to be "Ready" ...
	I0610 19:41:48.460362    9512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
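
Once the node reports Ready (42.5s above), the wait moves to system-critical pods: a single PodList over kube-system, then a per-pod wait keyed on the PodReady condition, as the next requests show. Below is a hedged sketch of that per-pod check with client-go, illustrative rather than minikube's pod_ready.go; the kubeconfig path is again a placeholder.

    // Sketch: list kube-system pods and report whether each carries
    // the PodReady condition with status True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-45s ready=%v\n", p.Name, podIsReady(&p))
        }
    }
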
	I0610 19:41:48.460418    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:41:48.460425    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.460433    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.460438    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.463550    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:48.463563    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.463569    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.463572    9512 round_trippers.go:580]     Audit-Id: a9bc3c1e-bce0-4ff2-b144-a4f9480059c4
	I0610 19:41:48.463580    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.463583    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.463586    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.463589    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.464410    9512 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"535"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"419","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70370 chars]
	I0610 19:41:48.466013    9512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.466057    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:41:48.466062    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.466068    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.466071    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.468199    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:48.468208    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.468213    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.468226    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.468228    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.468230    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.468234    9512 round_trippers.go:580]     Audit-Id: e894245e-eaf4-4587-8f82-3e592f81973e
	I0610 19:41:48.468237    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.468384    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"419","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0610 19:41:48.468652    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:48.468659    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.468665    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.468672    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.470547    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:48.470558    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.470565    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.470570    9512 round_trippers.go:580]     Audit-Id: 27f55002-665f-4705-96cc-912fba4654ae
	I0610 19:41:48.470574    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.470579    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.470583    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.470587    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.470719    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:48.470901    9512 pod_ready.go:92] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:48.470911    9512 pod_ready.go:81] duration metric: took 4.887674ms for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.470917    9512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.470956    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:41:48.470961    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.470977    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.470987    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.474122    9512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:41:48.474132    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.474138    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.474140    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.474143    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.474146    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.474149    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.474155    9512 round_trippers.go:580]     Audit-Id: 6ccbdc38-b162-4ee5-a5cc-21cbe4803f0a
	I0610 19:41:48.474317    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"394","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0610 19:41:48.474573    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:48.474586    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.474592    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.474597    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.475968    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:48.475986    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.475996    9512 round_trippers.go:580]     Audit-Id: 36ac8de8-59d7-421c-92d9-5874f4c47c34
	I0610 19:41:48.476001    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.476004    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.476007    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.476011    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.476014    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.476439    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:48.476618    9512 pod_ready.go:92] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:48.476626    9512 pod_ready.go:81] duration metric: took 5.705166ms for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.476636    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.476666    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:41:48.476671    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.476676    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.476680    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.479001    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:48.479018    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.479026    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.479031    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.479035    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.479039    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.479044    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.479071    9512 round_trippers.go:580]     Audit-Id: a587ba25-1c55-4526-b664-e47111d85006
	I0610 19:41:48.479189    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"396","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0610 19:41:48.479453    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:48.479460    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.479466    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.479470    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.481066    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:48.481074    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.481078    9512 round_trippers.go:580]     Audit-Id: 7014c8a9-7cbb-476d-8e78-4faa25dcea1f
	I0610 19:41:48.481080    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.481083    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.481086    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.481088    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.481091    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.481341    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:48.481518    9512 pod_ready.go:92] pod "kube-apiserver-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:48.481526    9512 pod_ready.go:81] duration metric: took 4.885276ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.481535    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.481576    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:41:48.481581    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.481587    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.481590    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.483090    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:48.483099    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.483107    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.483111    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.483114    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.483117    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.483120    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.483123    9512 round_trippers.go:580]     Audit-Id: 561e1be7-12ef-4d31-9bdb-0c67eb0f9798
	I0610 19:41:48.483370    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"393","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0610 19:41:48.483620    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:48.483626    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.483632    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.483637    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.488600    9512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:41:48.488623    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.488628    9512 round_trippers.go:580]     Audit-Id: deff9083-1614-466e-ac5f-92ea8b21c477
	I0610 19:41:48.488632    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.488636    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.488640    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.488644    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.488647    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:48 GMT
	I0610 19:41:48.488877    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:48.489074    9512 pod_ready.go:92] pod "kube-controller-manager-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:48.489083    9512 pod_ready.go:81] duration metric: took 7.543352ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.489091    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.658344    9512 request.go:629] Waited for 169.213991ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:41:48.658456    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:41:48.658469    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.658481    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.658489    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.661389    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:48.661400    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.661405    9512 round_trippers.go:580]     Audit-Id: c9d23ea5-3d46-4675-88b3-517b96b86de3
	I0610 19:41:48.661424    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.661442    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.661446    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.661451    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.661455    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:49 GMT
	I0610 19:41:48.661528    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:41:48.857937    9512 request.go:629] Waited for 196.103186ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:48.858007    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:41:48.858015    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:48.858024    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:48.858029    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:48.860219    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:48.860231    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:48.860236    9512 round_trippers.go:580]     Audit-Id: b256210d-0fae-493f-b9d8-042bd6b16e40
	I0610 19:41:48.860241    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:48.860244    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:48.860247    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:48.860254    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:48.860258    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:49 GMT
	I0610 19:41:48.860361    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"535","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3775 chars]
	I0610 19:41:48.860592    9512 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:48.860603    9512 pod_ready.go:81] duration metric: took 371.52056ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:48.860613    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:49.058393    9512 request.go:629] Waited for 197.740047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:41:49.058480    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:41:49.058522    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:49.058532    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:49.058538    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:49.061033    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:49.061051    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:49.061060    9512 round_trippers.go:580]     Audit-Id: 18c6c67c-eecd-4948-9837-8398ca23d506
	I0610 19:41:49.061065    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:49.061070    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:49.061076    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:49.061083    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:49.061087    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:49 GMT
	I0610 19:41:49.061161    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"384","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0610 19:41:49.258747    9512 request.go:629] Waited for 197.25921ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:49.258863    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:49.258874    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:49.258884    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:49.258899    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:49.261214    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:49.261229    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:49.261236    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:49.261241    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:49 GMT
	I0610 19:41:49.261246    9512 round_trippers.go:580]     Audit-Id: 3923135f-b435-44f7-aec0-774a390bb7db
	I0610 19:41:49.261249    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:49.261252    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:49.261256    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:49.261425    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:49.261681    9512 pod_ready.go:92] pod "kube-proxy-v7s4q" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:49.261693    9512 pod_ready.go:81] duration metric: took 401.087831ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:49.261702    9512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:49.457905    9512 request.go:629] Waited for 196.170922ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:41:49.457981    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:41:49.457989    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:49.457997    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:49.458002    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:49.459972    9512 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:41:49.459982    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:49.459987    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:49.459994    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:49.459997    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:49 GMT
	I0610 19:41:49.460000    9512 round_trippers.go:580]     Audit-Id: 79f3c897-23e7-4afa-b993-e32db9a4aa24
	I0610 19:41:49.460002    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:49.460005    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:49.460151    9512 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"395","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0610 19:41:49.658996    9512 request.go:629] Waited for 198.574763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:49.659106    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:41:49.659117    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:49.659129    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:49.659139    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:49.661847    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:49.661861    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:49.661869    9512 round_trippers.go:580]     Audit-Id: 43646b9d-6dd0-42a2-b8b1-d4566eef5e00
	I0610 19:41:49.661874    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:49.661899    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:49.661907    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:49.661911    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:49.661914    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:50 GMT
	I0610 19:41:49.662224    9512 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0610 19:41:49.662478    9512 pod_ready.go:92] pod "kube-scheduler-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:41:49.662489    9512 pod_ready.go:81] duration metric: took 400.795403ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:41:49.662500    9512 pod_ready.go:38] duration metric: took 1.202170802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:41:49.662516    9512 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 19:41:49.662577    9512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:41:49.674347    9512 system_svc.go:56] duration metric: took 11.828555ms WaitForService to wait for kubelet
	I0610 19:41:49.674363    9512 kubeadm.go:576] duration metric: took 43.922097524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:41:49.674381    9512 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:41:49.858105    9512 request.go:629] Waited for 183.669423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes
	I0610 19:41:49.858189    9512 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:41:49.858199    9512 round_trippers.go:469] Request Headers:
	I0610 19:41:49.858211    9512 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:41:49.858219    9512 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:41:49.861168    9512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:41:49.861184    9512 round_trippers.go:577] Response Headers:
	I0610 19:41:49.861192    9512 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:41:50 GMT
	I0610 19:41:49.861195    9512 round_trippers.go:580]     Audit-Id: 508822d4-200f-455a-979a-2876dd29b1ad
	I0610 19:41:49.861219    9512 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:41:49.861230    9512 round_trippers.go:580]     Content-Type: application/json
	I0610 19:41:49.861236    9512 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:41:49.861240    9512 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:41:49.861458    9512 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"536"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"428","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9778 chars]
	I0610 19:41:49.861854    9512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:41:49.861866    9512 node_conditions.go:123] node cpu capacity is 2
	I0610 19:41:49.861874    9512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:41:49.861881    9512 node_conditions.go:123] node cpu capacity is 2
	I0610 19:41:49.861886    9512 node_conditions.go:105] duration metric: took 187.507297ms to run NodePressure ...
	I0610 19:41:49.861896    9512 start.go:240] waiting for startup goroutines ...
	I0610 19:41:49.861923    9512 start.go:254] writing updated cluster config ...
	I0610 19:41:49.863125    9512 ssh_runner.go:195] Run: rm -f paused
	I0610 19:41:49.903814    9512 start.go:600] kubectl: 1.29.2, cluster: 1.30.1 (minor skew: 1)
	I0610 19:41:49.926137    9512 out.go:177] * Done! kubectl is now configured to use "multinode-353000" cluster and "default" namespace by default
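
The trace above alternates pod_ready.go readiness checks with request.go throttling waits: client-go's default client-side rate limiter (QPS 5, burst 10) delays each GET, which is what the "Waited for ... due to client-side throttling, not priority and fairness" lines record. Below is a minimal sketch of the same wait pattern written directly against client-go; the kubeconfig path is an illustrative placeholder, and this is not minikube's actual pod_ready.go wiring.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative path; minikube writes its kubeconfig under MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// The clientset inherits client-go's default rate limiter, the source
		// of the "client-side throttling" waits in the trace above.
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod reports the Ready condition, up to 6 minutes,
		// mirroring the "waiting up to 6m0s for pod ..." lines.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-nz5rp", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-proxy-nz5rp is Ready")
	}

Returning false with a nil error on transient failures keeps the poll alive until the deadline; returning a non-nil error would abort it immediately, matching the retry-until-timeout behavior visible in the trace.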
	
	
	==> Docker <==
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.261129734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.265709198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.265795913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.265872967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.265980510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 cri-dockerd[1082]: time="2024-06-11T02:40:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f43f6c7bede58a69fe95e7f6e4a96e4c0145bda25bab4942b195e6cc7424dde0/resolv.conf as [nameserver 192.169.0.1]"
	Jun 11 02:40:41 multinode-353000 cri-dockerd[1082]: time="2024-06-11T02:40:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5cbb1f2848836d51382d8812ba78b35ae3da32557aeb4da65870b782dca5f137/resolv.conf as [nameserver 192.169.0.1]"
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.407222086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.407320171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.407330271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.407462279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.489883482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.489937435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.490077797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:40:41 multinode-353000 dockerd[1184]: time="2024-06-11T02:40:41.490927208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:41:51 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:51.171516776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:41:51 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:51.171708776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:41:51 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:51.171722280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:41:51 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:51.171838517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:41:51 multinode-353000 cri-dockerd[1082]: time="2024-06-11T02:41:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/55c2b427ef24f4781aa11b4c7234eec05797d47c3a5c3d6986f5e7166241f05f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 11 02:41:53 multinode-353000 cri-dockerd[1082]: time="2024-06-11T02:41:53Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jun 11 02:41:53 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:53.695725326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:41:53 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:53.695798072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:41:53 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:53.695811190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:41:53 multinode-353000 dockerd[1184]: time="2024-06-11T02:41:53.696226954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
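
The cri-dockerd "re-write config file" lines above show per-pod DNS setup: the sandboxes created at 02:40:41 (the storage-provisioner and coredns pods, per the container status table below) get the host's nameserver 192.169.0.1, while the busybox sandbox at 02:41:51 is pointed at the cluster DNS service (kube-dns, clusterIP 10.96.0.10 per the kube-apiserver log further down), as is typical for dnsPolicy ClusterFirst. Reconstructed from that 02:41:51 log line, the rewritten resolv.conf in the busybox sandbox reads:

	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local
	options ndots:5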
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c6ad13b3a78e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   55c2b427ef24f       busybox-fc5497c4f-4hdtl
	deba067632e3e       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   5cbb1f2848836       coredns-7db6d8ff4d-x984g
	130521568c691       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   f43f6c7bede58       storage-provisioner
	f854aa2e2bd31       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              4 minutes ago       Running             kindnet-cni               0                   5e434eeac16fa       kindnet-j4h99
	1b251ec109bf4       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                0                   75aef0f938fa2       kube-proxy-v7s4q
	496239ba94592       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   4479d5328ed80       etcd-multinode-353000
	4f9c6abaf085e       a52dc94f0a912                                                                                         5 minutes ago       Running             kube-scheduler            0                   2627ea28857a0       kube-scheduler-multinode-353000
	e847ea1ccea34       91be940803172                                                                                         5 minutes ago       Running             kube-apiserver            0                   4a744abd670d4       kube-apiserver-multinode-353000
	254a0e0afe628       25a1387cdab82                                                                                         5 minutes ago       Running             kube-controller-manager   0                   0e7e3b74d4e98       kube-controller-manager-multinode-353000
	
	
	==> coredns [deba067632e3] <==
	[INFO] 10.244.0.3:34294 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104089s
	[INFO] 10.244.1.2:59438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118937s
	[INFO] 10.244.1.2:54969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067018s
	[INFO] 10.244.1.2:38029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071562s
	[INFO] 10.244.1.2:34326 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056229s
	[INFO] 10.244.1.2:53072 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077454s
	[INFO] 10.244.1.2:42751 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106879s
	[INFO] 10.244.1.2:35314 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070499s
	[INFO] 10.244.1.2:47905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037641s
	[INFO] 10.244.0.3:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080705s
	[INFO] 10.244.0.3:57627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107863s
	[INFO] 10.244.0.3:35475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031072s
	[INFO] 10.244.0.3:43687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098542s
	[INFO] 10.244.1.2:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087221s
	[INFO] 10.244.1.2:53832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099684s
	[INFO] 10.244.1.2:48880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068665s
	[INFO] 10.244.1.2:45968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057536s
	[INFO] 10.244.0.3:58843 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096021s
	[INFO] 10.244.0.3:32849 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001271s
	[INFO] 10.244.0.3:48661 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121766s
	[INFO] 10.244.0.3:42982 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000079089s
	[INFO] 10.244.1.2:53588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095171s
	[INFO] 10.244.1.2:51363 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006577s
	[INFO] 10.244.1.2:50446 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069941s
	[INFO] 10.244.1.2:58279 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000137813s
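
The NXDOMAIN/NOERROR pairs for kubernetes.default above are ordinary search-path expansion under the options ndots:5 resolv.conf shown after the Docker section: a name with fewer than five dots is tried against each search suffix, so kubernetes.default.default.svc.cluster.local fails before kubernetes.default.svc.cluster.local resolves, and the bare name is also forwarded upstream. A tiny self-contained sketch of that expansion (pure computation, no cluster needed; exact query order depends on the pod's resolver):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Candidate FQDNs a resolver generates for a short name under ndots:5.
		name := "kubernetes.default"
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		if strings.Count(name, ".") < 5 {
			// Fewer dots than ndots: each search suffix is attempted, which is
			// why coredns logs kubernetes.default.default.svc.cluster.local
			// (NXDOMAIN) alongside kubernetes.default.svc.cluster.local (NOERROR).
			for _, s := range search {
				fmt.Println(name + "." + s + ".")
			}
		}
		fmt.Println(name + ".") // the name as-is is also attempted
	}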
	
	
	==> describe nodes <==
	Name:               multinode-353000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T19_40_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Jun 2024 02:42:19 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Jun 2024 02:42:19 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Jun 2024 02:42:19 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Jun 2024 02:42:19 +0000   Tue, 11 Jun 2024 02:40:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.19
	  Hostname:    multinode-353000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 552fa6b4d7d740878c8ae17812191aaa
	  System UUID:                f0e94315-0000-0000-ac08-1f17bf5837e0
	  Boot ID:                    351b1f67-0330-437b-bd68-e4d9fb6509f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hdtl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 coredns-7db6d8ff4d-x984g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m41s
	  kube-system                 etcd-multinode-353000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-j4h99                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m42s
	  kube-system                 kube-apiserver-multinode-353000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-multinode-353000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-v7s4q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-multinode-353000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m40s  kube-proxy       
	  Normal  Starting                 4m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m57s  kubelet          Node multinode-353000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s  kubelet          Node multinode-353000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s  kubelet          Node multinode-353000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m42s  node-controller  Node multinode-353000 event: Registered Node multinode-353000 in Controller
	  Normal  NodeReady                4m33s  kubelet          Node multinode-353000 status is now: NodeReady
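
The Capacity block above (cpu: 2, ephemeral-storage: 17734596Ki) is the same data the node_conditions.go lines in the earlier trace read back for every node. A hedged client-go sketch that prints those two fields per node (kubeconfig path illustrative, as before):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Status.Capacity is a map of resource.Quantity values.
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}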
	
	
	Name:               multinode-353000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T19_41_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:41:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:45:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:41:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:41:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:41:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:41:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.20
	  Hostname:    multinode-353000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 32bb2f108a254471a31dc67f28f9d3d4
	  System UUID:                3b1545e7-0000-0000-88e9-620fa037ae16
	  Boot ID:                    38bf82fb-0b80-495c-b710-667d6f0da6a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fznn5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kindnet-mcx2t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-proxy-nz5rp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x2 over 4m9s)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x2 over 4m9s)  kubelet          Node multinode-353000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x2 over 4m9s)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node multinode-353000-m02 event: Registered Node multinode-353000-m02 in Controller
	  Normal  NodeReady                3m25s                kubelet          Node multinode-353000-m02 status is now: NodeReady
	
	
	Name:               multinode-353000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T19_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:42:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:43:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.21
	  Hostname:    multinode-353000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0b2b5ca283d4d038600d206ae5a6972
	  System UUID:                9ed34225-0000-0000-87bc-ec0cd1dc4108
	  Boot ID:                    640ea9bf-6aae-4a1d-b22c-e4c9acf51e74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mqj8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-f6tzv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet          Node multinode-353000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet          Node multinode-353000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet          Node multinode-353000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node multinode-353000-m03 event: Registered Node multinode-353000-m03 in Controller
	  Normal  NodeReady                2m12s                  kubelet          Node multinode-353000-m03 status is now: NodeReady
	  Normal  NodeNotReady             82s                    node-controller  Node multinode-353000-m03 status is now: NodeNotReady
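
Unlike the first two nodes, multinode-353000-m03 stopped renewing its lease at 02:43:00; once the controller-manager's node-monitor grace period elapsed (the transition at 02:43:51, roughly 50s after the last heartbeat), the node-lifecycle-controller set every condition to Unknown and added the node.kubernetes.io/unreachable NoSchedule/NoExecute taints shown under Taints, then emitted the NodeNotReady event above. A sketch that reads those conditions and taints back through the API (kubeconfig path illustrative):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-353000-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range n.Status.Conditions {
			// Unknown status with reason NodeStatusUnknown matches the table above.
			fmt.Printf("condition %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
		}
		for _, t := range n.Spec.Taints {
			fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
		}
	}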
	
	
	==> dmesg <==
	[  +2.573832] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.256550] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.941026] systemd-fstab-generator[501]: Ignoring "noauto" option for root device
	[  +0.102334] systemd-fstab-generator[513]: Ignoring "noauto" option for root device
	[  +1.715559] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.263188] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.107567] systemd-fstab-generator[849]: Ignoring "noauto" option for root device
	[  +0.122290] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
	[Jun11 02:40] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +0.101124] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.112602] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.130878] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +0.057268] kauditd_printk_skb: 252 callbacks suppressed
	[  +4.181916] systemd-fstab-generator[1169]: Ignoring "noauto" option for root device
	[  +2.214374] kauditd_printk_skb: 34 callbacks suppressed
	[  +0.296859] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[  +4.139862] systemd-fstab-generator[1544]: Ignoring "noauto" option for root device
	[  +1.103774] kauditd_printk_skb: 83 callbacks suppressed
	[  +3.938191] systemd-fstab-generator[1952]: Ignoring "noauto" option for root device
	[ +15.559756] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[  +0.120925] kauditd_printk_skb: 52 callbacks suppressed
	[  +8.948065] kauditd_printk_skb: 60 callbacks suppressed
	[Jun11 02:41] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [496239ba9459] <==
	{"level":"info","ts":"2024-06-11T02:40:12.871941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 switched to configuration voters=(1615721917670479112)"}
	{"level":"info","ts":"2024-06-11T02:40:12.872114Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","added-peer-id":"166c32860e8fd508","added-peer-peer-urls":["https://192.169.0.19:2380"]}
	{"level":"info","ts":"2024-06-11T02:40:12.87283Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-11T02:40:12.873814Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:40:12.874543Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:40:12.875064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"166c32860e8fd508","initial-advertise-peer-urls":["https://192.169.0.19:2380"],"listen-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-11T02:40:12.876488Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-11T02:40:13.416587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-11T02:40:13.416632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-11T02:40:13.416656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgPreVoteResp from 166c32860e8fd508 at term 1"}
	{"level":"info","ts":"2024-06-11T02:40:13.416849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became candidate at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.41688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgVoteResp from 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became leader at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 166c32860e8fd508 elected leader 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.420105Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"166c32860e8fd508","local-member-attributes":"{Name:multinode-353000 ClientURLs:[https://192.169.0.19:2379]}","request-path":"/0/members/166c32860e8fd508/attributes","cluster-id":"f10222c540877db9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-11T02:40:13.420141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.420334Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.420479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.422269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.19:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.42366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.426545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.426575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.443729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443804Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443841Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 02:45:13 up 5 min,  0 users,  load average: 0.32, 0.37, 0.19
	Linux multinode-353000 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f854aa2e2bd3] <==
	I0611 02:44:26.357689       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:44:36.362261       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:44:36.362327       1 main.go:227] handling current node
	I0611 02:44:36.362346       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:44:36.362358       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:44:36.362427       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:44:36.362466       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:44:46.374201       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:44:46.374235       1 main.go:227] handling current node
	I0611 02:44:46.374243       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:44:46.374247       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:44:46.374729       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:44:46.374755       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:44:56.379765       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:44:56.379800       1 main.go:227] handling current node
	I0611 02:44:56.379809       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:44:56.379813       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:44:56.380004       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:44:56.380081       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:06.387267       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:06.387415       1 main.go:227] handling current node
	I0611 02:45:06.387438       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:06.387530       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:06.387707       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:06.387767       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
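
Each kindnet pass above enumerates the three nodes, pairing a node's InternalIP with its PodCIDR so it can maintain one route per remote pod subnet. A sketch that derives the same pairing from the API (kubeconfig path illustrative; the actual route programming is omitted):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			var ip string
			for _, a := range n.Status.Addresses {
				if a.Type == corev1.NodeInternalIP {
					ip = a.Address
				}
			}
			// e.g. "node multinode-353000-m02 ip=192.169.0.20 podCIDR=10.244.1.0/24"
			fmt.Printf("node %s ip=%s podCIDR=%s\n", n.Name, ip, n.Spec.PodCIDR)
		}
	}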
	
	
	==> kube-apiserver [e847ea1ccea3] <==
	I0611 02:40:14.612246       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0611 02:40:15.312437       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0611 02:40:15.314704       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0611 02:40:15.314732       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0611 02:40:15.679764       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0611 02:40:15.708512       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0611 02:40:15.858676       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0611 02:40:15.866508       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.19]
	I0611 02:40:15.867927       1 controller.go:615] quota admission added evaluator for: endpoints
	I0611 02:40:15.871602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0611 02:40:16.327000       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0611 02:40:16.576181       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0611 02:40:16.583448       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0611 02:40:16.589425       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0611 02:40:31.722543       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0611 02:40:31.972073       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0611 02:41:55.326804       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53134: use of closed network connection
	E0611 02:41:55.520429       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53136: use of closed network connection
	E0611 02:41:55.704725       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53138: use of closed network connection
	E0611 02:41:55.890514       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53140: use of closed network connection
	E0611 02:41:56.075029       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53142: use of closed network connection
	E0611 02:41:56.403023       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53145: use of closed network connection
	E0611 02:41:56.611283       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53147: use of closed network connection
	E0611 02:41:56.794808       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53149: use of closed network connection
	E0611 02:41:56.987769       1 conn.go:339] Error on socket receive: read tcp 192.169.0.19:8443->192.169.0.1:53151: use of closed network connection
	
	
	==> kube-controller-manager [254a0e0afe62] <==
	I0611 02:40:32.758858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.352606ms"
	I0611 02:40:32.759042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.362µs"
	I0611 02:40:40.910014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.455µs"
	I0611 02:40:40.919760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.148µs"
	I0611 02:40:41.128812       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0611 02:40:42.122795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.582µs"
	I0611 02:40:42.147670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.018989ms"
	I0611 02:40:42.147737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.798µs"
	I0611 02:41:05.726747       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m02\" does not exist"
	I0611 02:41:05.736926       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m02" podCIDRs=["10.244.1.0/24"]
	I0611 02:41:06.133872       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m02"
	I0611 02:41:48.707406       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:41:50.827299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.246398ms"
	I0611 02:41:50.836431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.08559ms"
	I0611 02:41:50.836953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.263µs"
	I0611 02:41:53.908886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.755154ms"
	I0611 02:41:53.909672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.964µs"
	I0611 02:41:54.537772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.288076ms"
	I0611 02:41:54.537833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.558µs"
	I0611 02:42:19.344515       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m03\" does not exist"
	I0611 02:42:19.344568       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:42:19.349890       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m03" podCIDRs=["10.244.2.0/24"]
	I0611 02:42:21.151832       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m03"
	I0611 02:43:01.974195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:43:51.177548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	
	
	==> kube-proxy [1b251ec109bf] <==
	I0611 02:40:32.780056       1 server_linux.go:69] "Using iptables proxy"
	I0611 02:40:32.794486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.19"]
	I0611 02:40:32.857420       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0611 02:40:32.857441       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0611 02:40:32.857452       1 server_linux.go:165] "Using iptables Proxier"
	I0611 02:40:32.859777       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0611 02:40:32.859889       1 server.go:872] "Version info" version="v1.30.1"
	I0611 02:40:32.859898       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:40:32.861522       1 config.go:192] "Starting service config controller"
	I0611 02:40:32.861557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0611 02:40:32.861607       1 config.go:101] "Starting endpoint slice config controller"
	I0611 02:40:32.861612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0611 02:40:32.862416       1 config.go:319] "Starting node config controller"
	I0611 02:40:32.862445       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0611 02:40:32.962479       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0611 02:40:32.962565       1 shared_informer.go:320] Caches are synced for service config
	I0611 02:40:32.969480       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4f9c6abaf085] <==
	W0611 02:40:14.372293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0611 02:40:14.372574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0611 02:40:14.372264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0611 02:40:14.372584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:14.372745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:14.372819       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.182489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:15.182664       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.203927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0611 02:40:15.203983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0611 02:40:15.281257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0611 02:40:15.281362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:15.290251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0611 02:40:15.290425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0611 02:40:15.336462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.336589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.431159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0611 02:40:15.431203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0611 02:40:15.442927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.442968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.494146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.494219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.551457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.551500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0611 02:40:17.163038       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 11 02:40:42 multinode-353000 kubelet[1960]: I0611 02:40:42.123497    1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x984g" podStartSLOduration=10.123482548 podStartE2EDuration="10.123482548s" podCreationTimestamp="2024-06-11 02:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-11 02:40:42.123353961 +0000 UTC m=+25.331300604" watchObservedRunningTime="2024-06-11 02:40:42.123482548 +0000 UTC m=+25.331429187"
	Jun 11 02:40:42 multinode-353000 kubelet[1960]: I0611 02:40:42.140000    1960 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.139985525 podStartE2EDuration="10.139985525s" podCreationTimestamp="2024-06-11 02:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-11 02:40:42.132757991 +0000 UTC m=+25.340704634" watchObservedRunningTime="2024-06-11 02:40:42.139985525 +0000 UTC m=+25.347932163"
	Jun 11 02:41:16 multinode-353000 kubelet[1960]: E0611 02:41:16.971108    1960 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:41:16 multinode-353000 kubelet[1960]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:41:16 multinode-353000 kubelet[1960]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:41:16 multinode-353000 kubelet[1960]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:41:16 multinode-353000 kubelet[1960]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:41:50 multinode-353000 kubelet[1960]: I0611 02:41:50.821231    1960 topology_manager.go:215] "Topology Admit Handler" podUID="3c820421-de3f-4771-b4c1-aac0ed316723" podNamespace="default" podName="busybox-fc5497c4f-4hdtl"
	Jun 11 02:41:50 multinode-353000 kubelet[1960]: I0611 02:41:50.960993    1960 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6pz\" (UniqueName: \"kubernetes.io/projected/3c820421-de3f-4771-b4c1-aac0ed316723-kube-api-access-wc6pz\") pod \"busybox-fc5497c4f-4hdtl\" (UID: \"3c820421-de3f-4771-b4c1-aac0ed316723\") " pod="default/busybox-fc5497c4f-4hdtl"
	Jun 11 02:41:55 multinode-353000 kubelet[1960]: E0611 02:41:55.520797    1960 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34338->127.0.0.1:38255: write tcp 127.0.0.1:34338->127.0.0.1:38255: write: broken pipe
	Jun 11 02:42:16 multinode-353000 kubelet[1960]: E0611 02:42:16.975292    1960 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:42:16 multinode-353000 kubelet[1960]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:42:16 multinode-353000 kubelet[1960]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:42:16 multinode-353000 kubelet[1960]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:42:16 multinode-353000 kubelet[1960]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:43:16 multinode-353000 kubelet[1960]: E0611 02:43:16.971093    1960 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:43:16 multinode-353000 kubelet[1960]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:43:16 multinode-353000 kubelet[1960]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:43:16 multinode-353000 kubelet[1960]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:43:16 multinode-353000 kubelet[1960]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:44:16 multinode-353000 kubelet[1960]: E0611 02:44:16.970238    1960 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:44:16 multinode-353000 kubelet[1960]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:44:16 multinode-353000 kubelet[1960]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:44:16 multinode-353000 kubelet[1960]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:44:16 multinode-353000 kubelet[1960]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
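The most prominent recurring error in the captured logs above is the kubelet's "Could not set up iptables canary" failure, repeating once a minute: ip6tables v1.8.9 cannot initialize the `nat' table in the Buildroot guest kernel, so creating the KUBE-KUBELET-CANARY chain exits with status 3. This is likely unrelated noise rather than the cause of this particular failure, and it can be checked by hand. A minimal sketch, assuming the profile from this run is still reachable (ip6table_nat is the standard kernel module name for that table; whether this guest image ships it is an assumption):

	# Reproduce the kubelet's view of the missing table (sketch)
	out/minikube-darwin-amd64 ssh -p multinode-353000 "sudo ip6tables -t nat -L"
	# If the module is built for this kernel, loading it would make the table appear
	out/minikube-darwin-amd64 ssh -p multinode-353000 "sudo modprobe ip6table_nat"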
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-353000 -n multinode-353000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-353000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (122.23s)
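For local triage, the two post-mortem probes the harness ran above (helpers_test.go:254 and :261) can be repeated verbatim against the same profile:

	out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-353000 -n multinode-353000
	kubectl --context multinode-353000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running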

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (285.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-353000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-353000
E0610 19:45:36.275023    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-353000: (24.90029691s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-353000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-353000 --wait=true -v=8 --alsologtostderr: exit status 90 (4m17.273679191s)

                                                
                                                
-- stdout --
	* [multinode-353000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-353000" primary control-plane node in "multinode-353000" cluster
	* Restarting existing hyperkit VM for "multinode-353000" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-353000-m02" worker node in "multinode-353000" cluster
	* Restarting existing hyperkit VM for "multinode-353000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.19
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 19:45:39.692404    9989 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:45:39.692578    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692584    9989 out.go:304] Setting ErrFile to fd 2...
	I0610 19:45:39.692587    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692759    9989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:45:39.694238    9989 out.go:298] Setting JSON to false
	I0610 19:45:39.716699    9989 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26095,"bootTime":1718047844,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:45:39.716794    9989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:45:39.738878    9989 out.go:177] * [multinode-353000] minikube v1.33.1 on Darwin 14.4.1
	I0610 19:45:39.781353    9989 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:45:39.781374    9989 notify.go:220] Checking for updates...
	I0610 19:45:39.824429    9989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:45:39.845512    9989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:45:39.866367    9989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:45:39.887316    9989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:45:39.908278    9989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:45:39.929733    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:39.929854    9989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:45:39.930309    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:39.930346    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:39.939199    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53775
	I0610 19:45:39.939566    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:39.939970    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:39.939978    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:39.940198    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:39.940315    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:39.969508    9989 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 19:45:40.011453    9989 start.go:297] selected driver: hyperkit
	I0610 19:45:40.011484    9989 start.go:901] validating driver "hyperkit" against &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.011697    9989 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:45:40.011899    9989 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.012122    9989 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 19:45:40.022075    9989 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 19:45:40.025893    9989 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.025915    9989 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 19:45:40.028541    9989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:45:40.028616    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:45:40.028625    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:45:40.028709    9989 start.go:340] cluster config:
	{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.028811    9989 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.071375    9989 out.go:177] * Starting "multinode-353000" primary control-plane node in "multinode-353000" cluster
	I0610 19:45:40.092477    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:45:40.092569    9989 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 19:45:40.092595    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:45:40.092792    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:45:40.092810    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:45:40.092980    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.093894    9989 start.go:360] acquireMachinesLock for multinode-353000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:45:40.094018    9989 start.go:364] duration metric: took 96.418µs to acquireMachinesLock for "multinode-353000"
	I0610 19:45:40.094053    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:45:40.094073    9989 fix.go:54] fixHost starting: 
	I0610 19:45:40.094498    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.094536    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:40.103465    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53777
	I0610 19:45:40.103833    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:40.104164    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:40.104180    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:40.104403    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:40.104528    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.104641    9989 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:45:40.104724    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.104851    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:45:40.105788    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.105820    9989 fix.go:112] recreateIfNeeded on multinode-353000: state=Stopped err=<nil>
	I0610 19:45:40.105834    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	W0610 19:45:40.105913    9989 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:45:40.148276    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000" ...
	I0610 19:45:40.169332    9989 main.go:141] libmachine: (multinode-353000) Calling .Start
	I0610 19:45:40.169590    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.169632    9989 main.go:141] libmachine: (multinode-353000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid
	I0610 19:45:40.171495    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.171526    9989 main.go:141] libmachine: (multinode-353000) DBG | pid 9523 is in state "Stopped"
	I0610 19:45:40.171559    9989 main.go:141] libmachine: (multinode-353000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid...
	I0610 19:45:40.171882    9989 main.go:141] libmachine: (multinode-353000) DBG | Using UUID f0e955cd-5ea6-4315-ac08-1f17bf5837e0
	I0610 19:45:40.275926    9989 main.go:141] libmachine: (multinode-353000) DBG | Generated MAC 6e:10:a7:68:76:8c
	I0610 19:45:40.275947    9989 main.go:141] libmachine: (multinode-353000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:45:40.276073    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276103    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276164    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f0e955cd-5ea6-4315-ac08-1f17bf5837e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:45:40.276203    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f0e955cd-5ea6-4315-ac08-1f17bf5837e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:45:40.276224    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:45:40.277704    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Pid is 10002
	I0610 19:45:40.278259    9989 main.go:141] libmachine: (multinode-353000) DBG | Attempt 0
	I0610 19:45:40.278270    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.278351    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 10002
	I0610 19:45:40.279973    9989 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:45:40.280067    9989 main.go:141] libmachine: (multinode-353000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:45:40.280108    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:45:40.280134    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:45:40.280161    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:45:40.280185    9989 main.go:141] libmachine: (multinode-353000) DBG | Found match: 6e:10:a7:68:76:8c
	I0610 19:45:40.280206    9989 main.go:141] libmachine: (multinode-353000) DBG | IP: 192.169.0.19
	I0610 19:45:40.280241    9989 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:45:40.280942    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:40.281154    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.281614    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:45:40.281625    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.281737    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:40.281835    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:40.281925    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282030    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282140    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:40.282302    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:40.282507    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:40.282515    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:45:40.285439    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:45:40.338413    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:45:40.339064    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.339085    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.339092    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.339099    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.721279    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:45:40.721293    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:45:40.835864    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.835901    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.835915    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.835928    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.836766    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:45:40.836785    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:45:46.073475    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:45:46.073515    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:45:46.073529    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:45:46.097300    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:45:51.340943    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:45:51.340958    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341127    9989 buildroot.go:166] provisioning hostname "multinode-353000"
	I0610 19:45:51.341138    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341240    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.341331    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.341432    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341515    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.341733    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.341882    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.341891    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000 && echo "multinode-353000" | sudo tee /etc/hostname
	I0610 19:45:51.407130    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000
	
	I0610 19:45:51.407155    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.407278    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.407374    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407468    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407561    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.407694    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.407848    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.407859    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:45:51.468420    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:45:51.468442    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:45:51.468459    9989 buildroot.go:174] setting up certificates
	I0610 19:45:51.468467    9989 provision.go:84] configureAuth start
	I0610 19:45:51.468474    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.468599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:51.468700    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.468783    9989 provision.go:143] copyHostCerts
	I0610 19:45:51.468813    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.468881    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:45:51.468890    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.469023    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:45:51.469222    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469262    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:45:51.469268    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469346    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:45:51.469495    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469543    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:45:51.469552    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469665    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:45:51.469841    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-353000]
	I0610 19:45:51.574939    9989 provision.go:177] copyRemoteCerts
	I0610 19:45:51.575027    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:45:51.575057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.575258    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.575433    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.575607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.575800    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:51.610260    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:45:51.610345    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 19:45:51.630147    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:45:51.630204    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:45:51.650528    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:45:51.650589    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:45:51.670054    9989 provision.go:87] duration metric: took 201.581041ms to configureAuth
	I0610 19:45:51.670067    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:45:51.670242    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:51.670255    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:51.670386    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.670503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.670607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670720    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670803    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.670922    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.671045    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.671053    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:45:51.726480    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:45:51.726495    9989 buildroot.go:70] root file system type: tmpfs
	I0610 19:45:51.726575    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:45:51.726593    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.726736    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.726853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.726941    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.727024    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.727157    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.727300    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.727345    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:45:51.793222    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:45:51.793246    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.793378    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.793475    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793564    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793652    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.793772    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.793927    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.793939    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:45:53.421030    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:45:53.421054    9989 machine.go:97] duration metric: took 13.139887748s to provisionDockerMachine
	I0610 19:45:53.421087    9989 start.go:293] postStartSetup for "multinode-353000" (driver="hyperkit")
	I0610 19:45:53.421100    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:45:53.421124    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.421309    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:45:53.421321    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.421404    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.421503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.421591    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.421689    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.456942    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:45:53.459812    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:45:53.459822    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:45:53.459827    9989 command_runner.go:130] > ID=buildroot
	I0610 19:45:53.459833    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:45:53.459840    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:45:53.459988    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:45:53.459999    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:45:53.460114    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:45:53.460308    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:45:53.460314    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:45:53.460524    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:45:53.467718    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:45:53.486520    9989 start.go:296] duration metric: took 65.424192ms for postStartSetup
	I0610 19:45:53.486540    9989 fix.go:56] duration metric: took 13.392941824s for fixHost
	I0610 19:45:53.486552    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.486683    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.486777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486935    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.487060    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:53.487195    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:53.487202    9989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:45:53.540939    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073953.908242527
	
	I0610 19:45:53.540950    9989 fix.go:216] guest clock: 1718073953.908242527
	I0610 19:45:53.540963    9989 fix.go:229] Guest: 2024-06-10 19:45:53.908242527 -0700 PDT Remote: 2024-06-10 19:45:53.486543 -0700 PDT m=+13.831437270 (delta=421.699527ms)
	I0610 19:45:53.540982    9989 fix.go:200] guest clock delta is within tolerance: 421.699527ms
	I0610 19:45:53.540986    9989 start.go:83] releasing machines lock for "multinode-353000", held for 13.447423727s
	I0610 19:45:53.541004    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541129    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:53.541236    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541536    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541646    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541706    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:45:53.541734    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541762    9989 ssh_runner.go:195] Run: cat /version.json
	I0610 19:45:53.541777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541836    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541857    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541939    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.541956    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.542057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542069    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542145    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.542159    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.621904    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:45:53.622832    9989 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 19:45:53.623012    9989 ssh_runner.go:195] Run: systemctl --version
	I0610 19:45:53.628064    9989 command_runner.go:130] > systemd 252 (252)
	I0610 19:45:53.628086    9989 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 19:45:53.628210    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:45:53.632390    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:45:53.632443    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:45:53.632487    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:45:53.644499    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:45:53.644515    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:45:53.644525    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.644620    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:53.659247    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
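	crictl reads its default endpoint from /etc/crictl.yaml, so after this write any crictl call inside the guest targets containerd without extra flags. A minimal sketch of the equivalent explicit form:
	
		sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version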
	I0610 19:45:53.659535    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:45:53.668457    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:45:53.677198    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:45:53.677239    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:45:53.685876    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.694608    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:45:53.703186    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.711800    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:45:53.720598    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:45:53.729427    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:45:53.738123    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
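	The sed edits above pin containerd to the cgroupfs cgroup driver (SystemdCgroup = false), matching the driver kubelet is configured with later in this run. A quick sanity check, as a sketch run inside the guest:
	
		grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false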
	I0610 19:45:53.747019    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:45:53.754733    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:45:53.754901    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:45:53.762666    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:53.871758    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 19:45:53.891305    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.891381    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:45:53.902978    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:45:53.903571    9989 command_runner.go:130] > [Unit]
	I0610 19:45:53.903596    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:45:53.903615    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:45:53.903621    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:45:53.903625    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:45:53.903632    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:45:53.903636    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:45:53.903639    9989 command_runner.go:130] > [Service]
	I0610 19:45:53.903642    9989 command_runner.go:130] > Type=notify
	I0610 19:45:53.903647    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:45:53.903653    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:45:53.903663    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:45:53.903670    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:45:53.903675    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:45:53.903681    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:45:53.903687    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:45:53.903693    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:45:53.903705    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:45:53.903711    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:45:53.903716    9989 command_runner.go:130] > ExecStart=
	I0610 19:45:53.903727    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:45:53.903732    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:45:53.903739    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:45:53.903744    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:45:53.903748    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:45:53.903751    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:45:53.903755    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:45:53.903763    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:45:53.903768    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:45:53.903771    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:45:53.903775    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:45:53.903780    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:45:53.903783    9989 command_runner.go:130] > Delegate=yes
	I0610 19:45:53.903788    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:45:53.903792    9989 command_runner.go:130] > KillMode=process
	I0610 19:45:53.903795    9989 command_runner.go:130] > [Install]
	I0610 19:45:53.903804    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:45:53.903867    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.918134    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:45:53.937012    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.947454    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.957667    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:45:53.978657    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.989706    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:54.004573    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:45:54.004840    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:45:54.007767    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:45:54.007939    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:45:54.015068    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:45:54.028412    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:45:54.125186    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:45:54.244241    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:45:54.244317    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 19:45:54.259051    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:54.351224    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:45:56.651603    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30043865s)
	I0610 19:45:56.651667    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 19:45:56.662260    9989 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 19:47:54.346370    9989 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m57.688173109s)
	I0610 19:47:54.346439    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.357366    9989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 19:47:54.453493    9989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 19:47:54.558404    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.660727    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 19:47:54.674518    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.685725    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.789246    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 19:47:54.849081    9989 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 19:47:54.849165    9989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 19:47:54.853149    9989 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 19:47:54.853161    9989 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 19:47:54.853166    9989 command_runner.go:130] > Device: 0,22	Inode: 754         Links: 1
	I0610 19:47:54.853172    9989 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 19:47:54.853177    9989 command_runner.go:130] > Access: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853185    9989 command_runner.go:130] > Modify: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853193    9989 command_runner.go:130] > Change: 2024-06-11 02:47:55.210828405 +0000
	I0610 19:47:54.853197    9989 command_runner.go:130] >  Birth: -
	I0610 19:47:54.853348    9989 start.go:562] Will wait 60s for crictl version
	I0610 19:47:54.853398    9989 ssh_runner.go:195] Run: which crictl
	I0610 19:47:54.856865    9989 command_runner.go:130] > /usr/bin/crictl
	I0610 19:47:54.856953    9989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 19:47:54.886614    9989 command_runner.go:130] > Version:  0.1.0
	I0610 19:47:54.886666    9989 command_runner.go:130] > RuntimeName:  docker
	I0610 19:47:54.886674    9989 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 19:47:54.886680    9989 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 19:47:54.887717    9989 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 19:47:54.887786    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.903316    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.904109    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.921823    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.965802    9989 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 19:47:54.965890    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:47:54.966288    9989 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0610 19:47:54.971034    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
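	The /etc/hosts update above is a replace-or-append idiom: grep -v strips any existing entry for the name, the fresh mapping is appended, and the result is copied back with sudo. The same idiom with a hypothetical name and IP, as a sketch:
	
		{ grep -v $'\texample.internal$' /etc/hosts; printf '10.0.0.1\texample.internal\n'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts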
	I0610 19:47:54.981371    9989 kubeadm.go:877] updating cluster {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 19:47:54.981452    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:47:54.981509    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:54.993718    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:54.993732    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:54.993737    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:54.993741    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:54.993744    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:54.993748    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:54.993753    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:54.993756    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:54.993761    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:54.993765    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:54.994255    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:54.994266    9989 docker.go:615] Images already preloaded, skipping extraction
	I0610 19:47:54.994336    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:55.006339    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:55.006352    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:55.006356    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:55.006360    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:55.006363    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:55.006379    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:55.006385    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:55.006390    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:55.006394    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:55.006398    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:55.006906    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:55.006921    9989 cache_images.go:84] Images are preloaded, skipping loading
	I0610 19:47:55.006932    9989 kubeadm.go:928] updating node { 192.169.0.19 8443 v1.30.1 docker true true} ...
	I0610 19:47:55.007008    9989 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
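	The kubelet flags above land in a systemd drop-in (scp'd shortly below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line clears the command inherited from the base unit, just as in the docker unit earlier. To inspect the merged result inside the guest, a sketch:
	
		sudo systemctl cat kubelet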
	I0610 19:47:55.007079    9989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 19:47:55.025485    9989 command_runner.go:130] > cgroupfs
	I0610 19:47:55.026122    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:47:55.026131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:47:55.026139    9989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 19:47:55.026158    9989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353000 NodeName:multinode-353000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 19:47:55.026249    9989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-353000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
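	The generated file above stitches four documents together: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a hedged sketch (recent kubeadm releases ship a validate subcommand; the path matches the scp step below), such a file can be sanity-checked inside the guest with:
	
		sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new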
	
	I0610 19:47:55.026311    9989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 19:47:55.034754    9989 command_runner.go:130] > kubeadm
	I0610 19:47:55.034764    9989 command_runner.go:130] > kubectl
	I0610 19:47:55.034767    9989 command_runner.go:130] > kubelet
	I0610 19:47:55.034842    9989 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 19:47:55.034886    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 19:47:55.042800    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0610 19:47:55.056385    9989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 19:47:55.069690    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0610 19:47:55.083214    9989 ssh_runner.go:195] Run: grep 192.169.0.19	control-plane.minikube.internal$ /etc/hosts
	I0610 19:47:55.086096    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:47:55.096237    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:55.195683    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:47:55.209046    9989 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000 for IP: 192.169.0.19
	I0610 19:47:55.209070    9989 certs.go:194] generating shared ca certs ...
	I0610 19:47:55.209087    9989 certs.go:226] acquiring lock for ca certs: {Name:mkb8782270d93d160af8329e99f7f211e7b6b737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:47:55.209270    9989 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key
	I0610 19:47:55.209345    9989 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key
	I0610 19:47:55.209355    9989 certs.go:256] generating profile certs ...
	I0610 19:47:55.209458    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key
	I0610 19:47:55.209537    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6
	I0610 19:47:55.209630    9989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key
	I0610 19:47:55.209637    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 19:47:55.209659    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 19:47:55.209677    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 19:47:55.209695    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 19:47:55.209716    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 19:47:55.209746    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 19:47:55.209778    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 19:47:55.209796    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 19:47:55.209888    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem (1338 bytes)
	W0610 19:47:55.209936    9989 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485_empty.pem, impossibly tiny 0 bytes
	I0610 19:47:55.209945    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 19:47:55.209987    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem (1082 bytes)
	I0610 19:47:55.210029    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem (1123 bytes)
	I0610 19:47:55.210067    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem (1679 bytes)
	I0610 19:47:55.210150    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:47:55.210197    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem -> /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.210218    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.210236    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.210677    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 19:47:55.243710    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 19:47:55.274291    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 19:47:55.304150    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 19:47:55.327241    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 19:47:55.347168    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 19:47:55.366973    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 19:47:55.386745    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 19:47:55.406837    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem --> /usr/share/ca-certificates/6485.pem (1338 bytes)
	I0610 19:47:55.426587    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /usr/share/ca-certificates/64852.pem (1708 bytes)
	I0610 19:47:55.446314    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 19:47:55.466320    9989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 19:47:55.480094    9989 ssh_runner.go:195] Run: openssl version
	I0610 19:47:55.484173    9989 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 19:47:55.484381    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6485.pem && ln -fs /usr/share/ca-certificates/6485.pem /etc/ssl/certs/6485.pem"
	I0610 19:47:55.492857    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496253    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496359    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496397    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.500429    9989 command_runner.go:130] > 51391683
	I0610 19:47:55.500562    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6485.pem /etc/ssl/certs/51391683.0"
	I0610 19:47:55.508913    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64852.pem && ln -fs /usr/share/ca-certificates/64852.pem /etc/ssl/certs/64852.pem"
	I0610 19:47:55.517404    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520837    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520969    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.521015    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.525079    9989 command_runner.go:130] > 3ec20f2e
	I0610 19:47:55.525226    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 19:47:55.533665    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 19:47:55.542055    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545479    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545578    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545613    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.549597    9989 command_runner.go:130] > b5213941
	I0610 19:47:55.549850    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
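	The three blocks above install each CA into the system trust store using OpenSSL's subject-hash convention: a CA is looked up under /etc/ssl/certs/<subject-hash>.0, where the hash is the value printed by openssl x509 -hash. The idiom with a hypothetical certificate, as a sketch:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
		sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${h}.0"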
	I0610 19:47:55.558357    9989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561717    9989 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561732    9989 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 19:47:55.561740    9989 command_runner.go:130] > Device: 253,1	Inode: 8384328     Links: 1
	I0610 19:47:55.561749    9989 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:47:55.561758    9989 command_runner.go:130] > Access: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561763    9989 command_runner.go:130] > Modify: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561770    9989 command_runner.go:130] > Change: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561776    9989 command_runner.go:130] >  Birth: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561913    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 19:47:55.566014    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.566161    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 19:47:55.570209    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.570381    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 19:47:55.574601    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.574837    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 19:47:55.578866    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.579032    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 19:47:55.583114    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.583281    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 19:47:55.587426    9989 command_runner.go:130] > Certificate will not expire
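	The -checkend 86400 checks above ask "will this certificate still be valid in 24 hours?": openssl exits non-zero (and prints "Certificate will expire") if the cert lapses within the given number of seconds. A sketch with a hypothetical path:
	
		# exits 0 and prints "Certificate will not expire" when valid for at least another day
		openssl x509 -noout -in /path/to/cert.crt -checkend 86400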
	I0610 19:47:55.587558    9989 kubeadm.go:391] StartCluster: {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:47:55.587674    9989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.599645    9989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 19:47:55.607448    9989 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 19:47:55.607459    9989 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 19:47:55.607466    9989 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 19:47:55.607470    9989 command_runner.go:130] > member
	W0610 19:47:55.607549    9989 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 19:47:55.607559    9989 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 19:47:55.607568    9989 kubeadm.go:587] restartPrimaryControlPlane start ...
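The ls probe above is the restart-vs-fresh-init decision: because /var/lib/kubelet/config.yaml, kubeadm-flags.env, and an etcd member directory all exist, kubeadm.go takes the restart path instead of a clean kubeadm init. A rough local sketch of that presence check (run over SSH in the real flow; hasExistingControlPlane is our name):

    package main

    import (
        "fmt"
        "os"
    )

    // hasExistingControlPlane approximates the check in the log: if the kubelet
    // config, the kubeadm flags file, and the etcd data directory are all
    // present, a cluster restart is attempted rather than a fresh init.
    func hasExistingControlPlane() bool {
        for _, p := range []string{
            "/var/lib/kubelet/config.yaml",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/minikube/etcd",
        } {
            if _, err := os.Stat(p); err != nil {
                return false
            }
        }
        return true
    }

    func main() { fmt.Println("attempt restart:", hasExistingControlPlane()) }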
	I0610 19:47:55.607620    9989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 19:47:55.615074    9989 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:47:55.615382    9989 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-353000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.615468    9989 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-5942/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-353000" cluster setting kubeconfig missing "multinode-353000" context setting]
	I0610 19:47:55.615649    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
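The lock.go line shows the kubeconfig rewrite being serialized through a file lock with a 500ms retry delay and a 1m timeout. As a generic illustration only (this is not minikube's actual locker), the same retry-until-deadline pattern with an O_EXCL lock file:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireFileLock retries every delay until timeout, as in the log's
    // {Delay:500ms Timeout:1m0s}. Illustrative only.
    func acquireFileLock(path string, delay, timeout time.Duration) (func(), error) {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lock) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + lock)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireFileLock("/tmp/kubeconfig", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to rewrite kubeconfig")
    }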
	I0610 19:47:55.616397    9989 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.616577    9989 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x89f9600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:47:55.616926    9989 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 19:47:55.617061    9989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 19:47:55.624482    9989 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.19
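The diff -u between kubeadm.yaml and kubeadm.yaml.new is the reconfiguration gate: diff exits 0 when the rendered config matches what is already on the node, so no control-plane reconfiguration is required. Sketched in Go (needsReconfig is our name; exit code 1 means the files differ, 2 means diff itself failed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfig mirrors the gate in the log: diff exits 0 when the files
    // match, 1 when they differ, 2 on error.
    func needsReconfig(current, proposed string) (bool, error) {
        err := exec.Command("diff", "-u", current, proposed).Run()
        if err == nil {
            return false, nil // identical: cluster does not require reconfiguration
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, nil // files differ: rewrite and reconfigure
        }
        return false, err
    }

    func main() {
        changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(changed, err)
    }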
	I0610 19:47:55.624500    9989 kubeadm.go:1154] stopping kube-system containers ...
	I0610 19:47:55.624549    9989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.638294    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.638306    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.638309    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.638314    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.638319    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.638322    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.638326    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.638329    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.638332    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.638345    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.638349    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.638352    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.638355    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.638358    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.638362    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.638365    9989 command_runner.go:130] > 2627ea28857a
	I0610 19:47:55.638951    9989 docker.go:483] Stopping containers: [deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a]
	I0610 19:47:55.639021    9989 ssh_runner.go:195] Run: docker stop deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a
	I0610 19:47:55.653484    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.653495    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.653500    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.653503    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.653506    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.653624    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.653629    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.653632    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.653791    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.653797    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.653800    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.653803    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.653806    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.653844    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.653850    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.653853    9989 command_runner.go:130] > 2627ea28857a
	I0610 19:47:55.654638    9989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
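Stopping the kube-system containers is the two-step sequence visible above: docker ps -a with the k8s_.*_(kube-system)_ name filter yields the sixteen container IDs, which are passed to a single docker stop, followed by systemctl stop kubelet. The same pair of docker commands driven from Go, as a sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers reproduces the sequence from the log: list the
    // IDs of kube-system pod containers, then stop them in one invocation.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil
        }
        fmt.Println("stopping containers:", ids)
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println("stop failed:", err)
        }
    }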
	I0610 19:47:55.667514    9989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 19:47:55.674892    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 19:47:55.674904    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 19:47:55.674910    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 19:47:55.674930    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674992    9989 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674999    9989 kubeadm.go:156] found existing configuration files:
	
	I0610 19:47:55.675040    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 19:47:55.682287    9989 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682303    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682341    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 19:47:55.689835    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 19:47:55.696884    9989 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696902    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696953    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 19:47:55.704404    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.711485    9989 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711508    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711548    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.718937    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 19:47:55.726127    9989 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726146    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726181    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
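The four grep/rm pairs above implement the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so the kubeconfig phase below can regenerate it. Here each grep exits 2 because the files are already absent, making the rm calls no-ops. The loop, sketched in Go:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // cleanStaleConfigs mirrors the log's loop: remove any kubeconfig that
    // does not mention the expected control-plane endpoint.
    func cleanStaleConfigs() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if exec.Command("grep", endpoint, f).Run() != nil {
                fmt.Printf("%s: endpoint missing or file absent, removing\n", f)
                os.Remove(f) // a missing file is fine here, so the error is ignored
            }
        }
    }

    func main() { cleanStaleConfigs() }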
	I0610 19:47:55.733619    9989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 19:47:55.741255    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.804058    9989 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 19:47:55.804120    9989 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 19:47:55.804305    9989 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 19:47:55.804483    9989 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 19:47:55.804689    9989 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 19:47:55.804862    9989 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 19:47:55.805120    9989 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 19:47:55.805265    9989 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 19:47:55.805411    9989 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 19:47:55.805605    9989 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 19:47:55.805743    9989 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 19:47:55.806676    9989 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 19:47:55.806774    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.845988    9989 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 19:47:55.886933    9989 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 19:47:56.013943    9989 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 19:47:56.065755    9989 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 19:47:56.199902    9989 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 19:47:56.356026    9989 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 19:47:56.358145    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.407409    9989 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:47:56.408002    9989 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:47:56.408066    9989 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 19:47:56.513337    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.563955    9989 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 19:47:56.563969    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 19:47:56.570350    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 19:47:56.570701    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 19:47:56.571965    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.651317    9989 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
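Note that the restart path never runs a full kubeadm init: it replays the individual init phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same generated config, each phase reusing on-disk state where it exists, hence all the "Using existing ..." lines. The sequence, as a sketch:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // The phases replayed in the log, in order. Each is idempotent against
    // existing certificates and manifests.
    var phases = [][]string{
        {"init", "phase", "certs", "all"},
        {"init", "phase", "kubeconfig", "all"},
        {"init", "phase", "kubelet-start"},
        {"init", "phase", "control-plane", "all"},
        {"init", "phase", "etcd", "local"},
    }

    func main() {
        for _, p := range phases {
            cmd := exec.Command("kubeadm", append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }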
	I0610 19:47:56.653781    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:47:56.653842    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.154036    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.654114    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.666427    9989 command_runner.go:130] > 1536
	I0610 19:47:57.666488    9989 api_server.go:72] duration metric: took 1.012757588s to wait for apiserver process to appear ...
	I0610 19:47:57.666498    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:47:57.666515    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.438002    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.438019    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.438029    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.455738    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.455759    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.667766    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.672313    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:47:59.672324    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.166779    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.171966    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:48:00.171979    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.666724    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.671558    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
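The healthz probes above show the expected startup progression: 403 while anonymous access is still refused, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending (the [-] lines), and finally a bare "ok" at 200. A sketch of the same poll loop; TLS verification is skipped here only for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz retries /healthz until it answers 200, mirroring the
    // 403 -> 500 -> 200 progression in the log.
    func pollHealthz(endpoint string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(endpoint); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
        fmt.Println(pollHealthz("https://192.169.0.19:8443/healthz", time.Minute))
    }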
	I0610 19:48:00.671622    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:00.671627    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:00.671635    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:00.671638    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:00.683001    9989 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 19:48:00.683015    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:00.683020    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:00.683023    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:00.683026    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:00.683029    9989 round_trippers.go:580]     Audit-Id: 09da700d-8425-4926-9374-2d6528bd7bb9
	I0610 19:48:00.683033    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:00.683035    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:00.683038    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:00.683058    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:00.683109    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:00.683119    9989 api_server.go:131] duration metric: took 3.016721791s to wait for apiserver health ...
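The control-plane version is read straight out of the /version response body shown above. Decoding just the fields the log uses (the struct is ours, trimmed to what appears in the response):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo covers the fields of the /version body shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.1
    }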
	I0610 19:48:00.683126    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:48:00.683131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:48:00.722329    9989 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 19:48:00.744311    9989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 19:48:00.748261    9989 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 19:48:00.748273    9989 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 19:48:00.748278    9989 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 19:48:00.748283    9989 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:48:00.748290    9989 command_runner.go:130] > Access: 2024-06-11 02:45:50.361198634 +0000
	I0610 19:48:00.748295    9989 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 19:48:00.748300    9989 command_runner.go:130] > Change: 2024-06-11 02:45:47.690352312 +0000
	I0610 19:48:00.748303    9989 command_runner.go:130] >  Birth: -
	I0610 19:48:00.748470    9989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 19:48:00.748478    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 19:48:00.778024    9989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 19:48:01.117060    9989 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.147629    9989 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.301672    9989 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 19:48:01.356197    9989 command_runner.go:130] > daemonset.apps/kindnet configured
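With three nodes detected, kindnet is chosen as the CNI and its manifest (2438 bytes) is copied into the VM and applied with the bundled kubectl. kubectl apply is idempotent, which is why the cluster role, binding, and service account report "unchanged" while only the daemonset is "configured". The apply step as run in the log, wrapped in a Go sketch:

    package main

    import (
        "os"
        "os/exec"
    )

    // applyCNI runs the command from the log: the node-local kubectl applies
    // the kindnet manifest against the in-VM kubeconfig.
    func applyCNI() error {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := applyCNI(); err != nil {
            os.Exit(1)
        }
    }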
	I0610 19:48:01.357762    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:01.357819    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.357825    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.357831    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.357834    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.361084    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:01.361095    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.361101    9989 round_trippers.go:580]     Audit-Id: 0a68b78a-1971-4606-9c89-6dd28309d599
	I0610 19:48:01.361107    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.361112    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.361115    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.361118    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.361121    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.362367    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.365313    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:01.365340    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 19:48:01.365347    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 19:48:01.365352    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:01.365356    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 19:48:01.365362    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:01.365367    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 19:48:01.365371    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 19:48:01.365374    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:01.365377    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:01.365381    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0610 19:48:01.365385    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 19:48:01.365390    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0610 19:48:01.365395    9989 system_pods.go:74] duration metric: took 7.626153ms to wait for pod list to return data ...
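The pod inventory above comes from a single GET on /api/v1/namespaces/kube-system/pods; each "Running / Ready:ContainersNotReady" line is derived from the pod's status conditions. The equivalent query through client-go, assuming a kubeconfig path of your own (the real run builds its rest.Config from the profile's client certificates, as logged earlier):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }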
	I0610 19:48:01.365403    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:01.365440    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:01.365444    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.365450    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.365454    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.367622    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.367635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.367640    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.367653    9989 round_trippers.go:580]     Audit-Id: 9ef6ecc8-1407-4850-b836-c92476875d2b
	I0610 19:48:01.367661    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.367666    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.367671    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.367674    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.367975    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15572 chars]
	I0610 19:48:01.368527    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368541    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368549    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368552    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368556    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368559    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368563    9989 node_conditions.go:105] duration metric: took 3.15591ms to run NodePressure ...
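The NodePressure pass lists all nodes once and records two capacity figures per node; all three nodes report 17734596Ki of ephemeral storage and 2 CPUs. Reading the same fields with client-go (kubeconfig path again assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The two values the log reports for each node.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }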
	I0610 19:48:01.368573    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:48:01.551683    9989 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 19:48:01.669147    9989 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 19:48:01.670157    9989 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 19:48:01.670212    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0610 19:48:01.670218    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.670224    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.670227    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.674624    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:01.674636    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.674641    9989 round_trippers.go:580]     Audit-Id: c47f63c6-e6e7-4d8d-b049-a6e6efe1f028
	I0610 19:48:01.674644    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.674650    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.674654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.674656    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.674659    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.675233    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0610 19:48:01.675943    9989 kubeadm.go:733] kubelet initialised
	I0610 19:48:01.675953    9989 kubeadm.go:734] duration metric: took 5.786634ms waiting for restarted kubelet to initialise ...
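The "restarted kubelet" check above restricts the pod list to static control-plane pods with a label selector, visible URL-encoded in the request path. Reconstructing that URL:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The request from the log: kube-system pods filtered to static
        // control-plane pods via the tier=control-plane label selector.
        u := url.URL{
            Scheme: "https",
            Host:   "192.169.0.19:8443",
            Path:   "/api/v1/namespaces/kube-system/pods",
        }
        q := url.Values{}
        q.Set("labelSelector", "tier=control-plane")
        u.RawQuery = q.Encode()
        fmt.Println(u.String())
        // https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
    }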
	I0610 19:48:01.675959    9989 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:01.676001    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.676006    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.676012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.676015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.678521    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.678536    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.678546    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.678551    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.678555    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.678558    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.678562    9989 round_trippers.go:580]     Audit-Id: 695aab2d-7185-4ab8-93db-4232865056b6
	I0610 19:48:01.678564    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.679581    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"916"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.681433    9989 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.681482    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:01.681487    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.681493    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.681497    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.683281    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.683286    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.683290    9989 round_trippers.go:580]     Audit-Id: ebbbfe81-a38f-4a3c-8e5c-90703473f744
	I0610 19:48:01.683293    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.683296    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.683308    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.683313    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.683316    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.683580    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:01.683874    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.683881    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.683887    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.683891    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686546    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.686555    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.686561    9989 round_trippers.go:580]     Audit-Id: 2892fe1d-d0a8-4261-8bf0-3133e5e2a446
	I0610 19:48:01.686565    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.686568    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.686571    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.686575    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.686578    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.686656    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.686844    9989 pod_ready.go:97] node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.686854    9989 pod_ready.go:81] duration metric: took 5.411979ms for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.686861    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
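The pattern above repeats for each system-critical pod: fetch the pod, then fetch its hosting node, and if the node's Ready condition is not True the per-pod wait is skipped (the "(skipping!)" lines) and retried on the next pass. The node gate, sketched against the client-go types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // nodeIsReady is the gate applied before each pod wait in the log: a pod
    // cannot be Ready while its node still reports Ready=False.
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println("ready:", nodeIsReady(n)) // false, so the wait is skipped and retried
    }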
	I0610 19:48:01.686867    9989 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.686904    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:01.686909    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.686915    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686918    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.688977    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.688986    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.688991    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.688996    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.689002    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.689007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.689011    9989 round_trippers.go:580]     Audit-Id: 3ace8889-aedb-4a19-9411-27b71b8a2e0b
	I0610 19:48:01.689015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.689291    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:01.689535    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.689542    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.689547    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.689550    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.690829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.690836    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.690841    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.690845    9989 round_trippers.go:580]     Audit-Id: 2f32a662-31a6-4053-8a84-be837537cd4c
	I0610 19:48:01.690848    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.690851    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.690855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.690858    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.691071    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.691242    9989 pod_ready.go:97] node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691252    9989 pod_ready.go:81] duration metric: took 4.380161ms for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.691258    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691269    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.691301    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:01.691306    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.691311    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.691315    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.692447    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.692457    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.692462    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.692466    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.692469    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.692471    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.692474    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.692476    9989 round_trippers.go:580]     Audit-Id: bad7c45b-bf08-4758-a569-97c3dc9eafb6
	I0610 19:48:01.692666    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"893","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0610 19:48:01.692920    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.692926    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.692932    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.692936    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.694073    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.694081    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.694086    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.694089    9989 round_trippers.go:580]     Audit-Id: 98fa13c5-25d7-4e14-b2a2-7560361baffd
	I0610 19:48:01.694092    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.694095    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.694098    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.694100    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.694341    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.694500    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.694509    9989 pod_ready.go:81] duration metric: took 3.23437ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.694514    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
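
The paired GET pod / GET node requests above, ending in "(skipping!)", are minikube's extra-wait short-circuit: a control-plane pod cannot turn Ready while its hosting node reports "Ready":"False", so the per-pod wait is skipped rather than burned against the 4m budget. A minimal sketch of that predicate with the standard client-go typed client (the helper name and clientset wiring are assumptions, not minikube's actual code):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hostNodeNotReady mirrors the two GETs above: fetch the pod, fetch the node
    // named in pod.Spec.NodeName, and report whether that node is not Ready.
    func hostNodeNotReady(ctx context.Context, c kubernetes.Interface, ns, pod string) (bool, error) {
        p, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        n, err := c.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range n.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status != corev1.ConditionTrue, nil
            }
        }
        return true, nil // no Ready condition reported yet: treat as not ready
    }
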
	I0610 19:48:01.694519    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.694545    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:01.694549    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.694555    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.694559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.695753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.695761    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.695766    9989 round_trippers.go:580]     Audit-Id: a7d05f7f-1539-4d5f-9fe3-3695667a8deb
	I0610 19:48:01.695770    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.695772    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.695775    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.695777    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.695780    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.695988    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"895","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0610 19:48:01.757966    9989 request.go:629] Waited for 61.697059ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758041    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758048    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.758053    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.758057    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.759756    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.759766    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.759773    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.759779    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.759783    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.759788    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.759793    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.759806    9989 round_trippers.go:580]     Audit-Id: e8ae6de5-f7c9-4f36-881c-ed09a8012b60
	I0610 19:48:01.759959    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.760178    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.760188    9989 pod_ready.go:81] duration metric: took 65.665915ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.760194    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
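
The "Waited for 61.697059ms due to client-side throttling, not priority and fairness" lines come from client-go itself: every request first takes a token from a client-side token bucket, and the client logs whenever that wait is long enough to notice. A sketch of the mechanism using client-go's flowcontrol package (QPS=5/Burst=10 are the classic rest.Config defaults, shown as an illustration rather than what minikube necessarily configures):

    package sketch

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func demoClientSideThrottle() {
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst
        for i := 0; i < 15; i++ {
            start := time.Now()
            _ = limiter.Wait(context.Background()) // blocks once the burst is spent
            if wait := time.Since(start); wait > 50*time.Millisecond {
                fmt.Printf("Waited for %v due to client-side throttling\n", wait)
            }
        }
    }
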
	I0610 19:48:01.760200    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.959909    9989 request.go:629] Waited for 199.659235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960065    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960075    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.960086    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.960093    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.962763    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.962778    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.962785    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.962789    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.962793    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.962819    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.962827    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.962832    9989 round_trippers.go:580]     Audit-Id: e27af578-4ca0-4cfe-8af3-b60f6b0fa9bd
	I0610 19:48:01.962941    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:02.158260    9989 request.go:629] Waited for 194.998097ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158342    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.158363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.158369    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.160892    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.160907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.160913    9989 round_trippers.go:580]     Audit-Id: 0bef1bb4-379d-409d-8e02-4dbc9a2811a4
	I0610 19:48:02.160918    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.160949    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.160957    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.160961    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.160968    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.161074    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"750","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:02.161324    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:02.161336    9989 pod_ready.go:81] duration metric: took 401.144458ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.161344    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:02.161351    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.358390    9989 request.go:629] Waited for 196.956176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358484    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358496    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.358508    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.358515    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.360992    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.361021    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.361031    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.361036    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.361039    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.361043    9989 round_trippers.go:580]     Audit-Id: 6f8be12b-1957-417b-8d1b-e678c7792dd3
	I0610 19:48:02.361046    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.361051    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.361202    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:02.557934    9989 request.go:629] Waited for 196.31847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.557999    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.558009    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.558037    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.558044    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.560427    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.560441    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.560448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.560454    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.560458    9989 round_trippers.go:580]     Audit-Id: 4c41615e-621c-4a97-9365-ac7c1773c395
	I0610 19:48:02.560461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.560465    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.560468    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.560523    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:02.560758    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:02.560768    9989 pod_ready.go:81] duration metric: took 399.425236ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.560777    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.757957    9989 request.go:629] Waited for 197.131938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758066    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758078    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.758089    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.758095    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.761202    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:02.761216    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.761223    9989 round_trippers.go:580]     Audit-Id: b73d177c-0cc8-4b3e-9eaa-58e1aca589bd
	I0610 19:48:02.761229    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.761233    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.761236    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.761240    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.761243    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.761619    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:02.958192    9989 request.go:629] Waited for 196.273854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958328    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.958357    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.958367    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.961275    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.961290    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.961297    9989 round_trippers.go:580]     Audit-Id: 55ebfcfe-9c2e-43ee-8757-62fb6711bcdf
	I0610 19:48:02.961302    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.961312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.961315    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.961320    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.961324    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.961498    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:02.961759    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961777    9989 pod_ready.go:81] duration metric: took 401.008697ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.961786    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961792    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:03.158219    9989 request.go:629] Waited for 196.363249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158365    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158377    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.158388    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.158394    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.160987    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.161000    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.161007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.161011    9989 round_trippers.go:580]     Audit-Id: 4b2e7508-8f47-4d7f-b4ea-f0310bd3d491
	I0610 19:48:03.161015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.161019    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.161023    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.161027    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.161126    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"897","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0610 19:48:03.359868    9989 request.go:629] Waited for 198.409302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.359998    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.360008    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.360020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.360027    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.362871    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.362892    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.362899    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.362904    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.362908    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.362916    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.362921    9989 round_trippers.go:580]     Audit-Id: ba3a2e04-447a-4800-872e-bbbc8698c7f3
	I0610 19:48:03.362931    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.363233    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:03.363483    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363503    9989 pod_ready.go:81] duration metric: took 401.718227ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:03.363511    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363517    9989 pod_ready.go:38] duration metric: took 1.687604899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:03.363529    9989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 19:48:03.375111    9989 command_runner.go:130] > -16
	I0610 19:48:03.375245    9989 ops.go:34] apiserver oom_adj: -16
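
The probe above ends the restart verification: reading /proc/<pid>/oom_adj for the running kube-apiserver and getting -16 confirms the kernel's OOM killer will strongly prefer other victims (on the legacy -17..15 scale, -16 roughly corresponds to the very negative oom_score_adj given to critical static pods). A sketch of the same read done natively rather than via `pgrep` over SSH (the helper name is hypothetical):

    package sketch

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdj reads the legacy OOM-killer adjustment for a PID; for the apiserver
    // above this returns "-16", matching the "apiserver oom_adj: -16" line.
    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }
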
	I0610 19:48:03.375257    9989 kubeadm.go:591] duration metric: took 7.76794986s to restartPrimaryControlPlane
	I0610 19:48:03.375262    9989 kubeadm.go:393] duration metric: took 7.787982406s to StartCluster
	I0610 19:48:03.375275    9989 settings.go:142] acquiring lock: {Name:mkfdfd0a396b1866366b70895e6d936c4f7de68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:48:03.375367    9989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:48:03.375765    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
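
The {Delay:500ms Timeout:1m0s} fields logged by lock.go describe a retry loop around an exclusive lock that guards kubeconfig writes against concurrent minikube processes. A minimal sketch of an acquire loop under those parameters (tryLock stands in for the real, platform-specific lock primitive; this is an assumption, not minikube's implementation):

    package sketch

    import (
        "fmt"
        "time"
    )

    // acquire retries tryLock every delay until it succeeds or timeout elapses,
    // matching the Delay/Timeout fields printed above.
    func acquire(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock() {
            if time.Now().After(deadline) {
                return fmt.Errorf("failed to acquire lock within %v", timeout)
            }
            time.Sleep(delay)
        }
        return nil
    }
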
	I0610 19:48:03.376028    9989 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 19:48:03.400444    9989 out.go:177] * Verifying Kubernetes components...
	I0610 19:48:03.376041    9989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 19:48:03.376184    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:03.421565    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:03.463087    9989 out.go:177] * Enabled addons: 
	I0610 19:48:03.484252    9989 addons.go:510] duration metric: took 108.208716ms for enable addons: enabled=[]
	I0610 19:48:03.563649    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:48:03.576041    9989 node_ready.go:35] waiting up to 6m0s for node "multinode-353000" to be "Ready" ...
	I0610 19:48:03.576103    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.576110    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.576116    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.576120    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.577625    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:03.577635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.577640    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.577644    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.577652    9989 round_trippers.go:580]     Audit-Id: 1a9b118d-1c1f-4a85-b573-ec6d65f2ea3e
	I0610 19:48:03.577656    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.577658    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.577661    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.577737    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.077472    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.077497    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.077513    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.077519    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.080273    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.080289    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.080298    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.080305    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.080311    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.080315    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.080320    9989 round_trippers.go:580]     Audit-Id: 1859e085-211f-4e27-92e7-f3b22958dff9
	I0610 19:48:04.080323    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.080687    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.577072    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.577095    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.577107    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.577115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.579474    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.579488    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.579496    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.579500    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.579505    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.579508    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.579511    9989 round_trippers.go:580]     Audit-Id: d35268d8-5a6a-4b80-9fc5-c56ab0f588fa
	I0610 19:48:04.579516    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.579860    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:05.077214    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.077238    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.077249    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.077255    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.079762    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.079777    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.079784    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.079788    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.079791    9989 round_trippers.go:580]     Audit-Id: 8db0d71b-506a-485d-b9c4-877536f220a0
	I0610 19:48:05.079795    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.079820    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.079828    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.079940    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.080178    9989 node_ready.go:49] node "multinode-353000" has status "Ready":"True"
	I0610 19:48:05.080194    9989 node_ready.go:38] duration metric: took 1.504185458s for node "multinode-353000" to be "Ready" ...
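
The burst of GETs against /api/v1/nodes/multinode-353000 between 19:48:03.576 and 19:48:05.080 is a plain poll: fetch the Node roughly every 500ms until its Ready condition flips to True (visible here as resourceVersion 842 advancing to 928). A sketch of such a loop using apimachinery's wait helpers (the 500ms interval is inferred from the timestamps; clientset wiring is assumed):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, cond := range n.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
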
	I0610 19:48:05.080202    9989 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:05.080250    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:05.080258    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.080265    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.080270    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.082809    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.082818    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.082823    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.082827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.082831    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.082834    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.082836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.082839    9989 round_trippers.go:580]     Audit-Id: ddb615f3-2587-4f9c-8d81-31db61bb1a6e
	I0610 19:48:05.083922    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
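
The single unfiltered PodList above seeds the per-pod waits that follow: the pods are matched client-side against the six label sets listed at 19:48:05.080202, and each match is then polled individually (coredns-7db6d8ff4d-x984g first, below). A sketch of that selection step, with the label table copied from the log and everything else assumed:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // systemCriticalPods filters a kube-system PodList down to the components the
    // log waits on. A pod counts if any one selector matches its labels.
    func systemCriticalPods(items []corev1.Pod) []corev1.Pod {
        selectors := []struct{ key, val string }{
            {"k8s-app", "kube-dns"},
            {"component", "etcd"},
            {"component", "kube-apiserver"},
            {"component", "kube-controller-manager"},
            {"k8s-app", "kube-proxy"},
            {"component", "kube-scheduler"},
        }
        var out []corev1.Pod
        for _, p := range items {
            for _, s := range selectors {
                if p.Labels[s.key] == s.val {
                    out = append(out, p)
                    break
                }
            }
        }
        return out
    }
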
	I0610 19:48:05.085829    9989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:05.085871    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.085875    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.085881    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.085896    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.086914    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.086929    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.086937    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.086941    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.086944    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.086947    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.086957    9989 round_trippers.go:580]     Audit-Id: b4ad06e6-d502-42ac-9675-7f15e25621df
	I0610 19:48:05.086961    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.087093    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.087343    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.087350    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.087355    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.087359    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.088202    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:05.088209    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.088215    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.088221    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.088226    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.088231    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.088236    9989 round_trippers.go:580]     Audit-Id: b6058267-b32d-4d28-9209-3e3c65514ada
	I0610 19:48:05.088239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.088425    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.586718    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.586742    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.586754    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.586759    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.589614    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.589627    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.589634    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.589639    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.589643    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.589648    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.589653    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.589657    9989 round_trippers.go:580]     Audit-Id: a2558bb6-21de-413e-adb7-2066705c0c39
	I0610 19:48:05.589740    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.590099    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.590114    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.590121    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.590127    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.591639    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.591647    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.591654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.591672    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.591679    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.591683    9989 round_trippers.go:580]     Audit-Id: 2de87cae-73ae-440c-a6d4-90fb3f51f475
	I0610 19:48:05.591688    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.591709    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.591808    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.086573    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.086600    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.086612    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.086618    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.089412    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.089427    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.089434    9989 round_trippers.go:580]     Audit-Id: f7e13af5-b1a6-43d3-bb98-5aad49fca036
	I0610 19:48:06.089438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.089441    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.089446    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.089450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.089589    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.089977    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.089987    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.089994    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.089998    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.091344    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.091353    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.091358    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.091361    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.091364    9989 round_trippers.go:580]     Audit-Id: 7a289ac0-a7eb-4e17-a539-34afa9d10e8f
	I0610 19:48:06.091367    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.091370    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.091372    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.091556    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.587106    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.587131    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.587143    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.587148    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.589792    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.589811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.589818    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.589822    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.589835    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.589840    9989 round_trippers.go:580]     Audit-Id: 1ec66f4a-3740-4406-bbd1-e5ca56116de6
	I0610 19:48:06.589843    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.589847    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.590009    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.590408    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.590419    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.590425    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.590431    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.591734    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.591742    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.591746    9989 round_trippers.go:580]     Audit-Id: 3ed956f5-c213-4c78-a89b-9a399e0d9f57
	I0610 19:48:06.591749    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.591752    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.591755    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.591758    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.591760    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.591853    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.086755    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.086817    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.086833    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.086840    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.089422    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.089436    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.089444    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.089448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.089456    9989 round_trippers.go:580]     Audit-Id: 3c2b2755-0928-4843-907f-76f6698cb531
	I0610 19:48:07.089461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.089464    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.089848    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.090239    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.090248    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.090257    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.090263    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.091435    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.091442    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.091447    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.091461    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.091466    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.091469    9989 round_trippers.go:580]     Audit-Id: a295b9b4-766e-4157-bafe-85b97af1b24f
	I0610 19:48:07.091473    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.091477    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.091632    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.091819    9989 pod_ready.go:102] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"False"
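
The entries above show the shape of minikube's readiness wait: roughly every 500 ms the client GETs the Pod and then its Node, checks the Pod's Ready condition, and emits a pod_ready status line until the condition flips to True or the 6m0s budget expires. Below is a minimal client-go sketch of that polling pattern, not minikube's actual pod_ready implementation; the helper names (waitPodReady, isPodReady) and the kubeconfig-based client setup are illustrative assumptions.

// Hypothetical sketch of the poll loop visible in the log; not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls every 500ms for up to 6 minutes, mirroring the
// cadence and timeout seen in the pod_ready log lines above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			return isPodReady(pod), nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-x984g"); err != nil {
		panic(err)
	}
	fmt.Printf("pod ready after %s\n", time.Since(start))
}
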
	I0610 19:48:07.586768    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.586789    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.586801    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.586811    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.589483    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.589501    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.589508    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.589513    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.589518    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.589523    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.589529    9989 round_trippers.go:580]     Audit-Id: 8f011804-7b53-46a0-8762-c6021b6b797c
	I0610 19:48:07.589533    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.589733    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.590139    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.590149    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.590157    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.590161    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.591411    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.591423    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.591431    9989 round_trippers.go:580]     Audit-Id: 32d80ac7-569b-4efe-b59c-6c43cc45cbb0
	I0610 19:48:07.591438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.591442    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.591450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.591455    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.591459    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.591711    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.085955    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.085978    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.085989    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.085995    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.088888    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:08.088905    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.088913    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.088917    9989 round_trippers.go:580]     Audit-Id: 6130cd3b-545c-4dab-bb4e-8509f6ca7583
	I0610 19:48:08.088921    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.088924    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.088929    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.088943    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.089331    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.089733    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.089743    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.089751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.089757    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.091163    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.091171    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.091176    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.091178    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.091181    9989 round_trippers.go:580]     Audit-Id: fb4feb18-1294-4799-b740-01b7c906b714
	I0610 19:48:08.091183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.091187    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.091191    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.091368    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.586116    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.586130    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.586136    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.586139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.588086    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.588098    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.588103    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.588106    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.588108    9989 round_trippers.go:580]     Audit-Id: 0ee2c29d-3bee-4ce6-b7f8-9c58b599b3c3
	I0610 19:48:08.588111    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.588114    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.588116    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.588226    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.588519    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.588525    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.588531    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.588534    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.593668    9989 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 19:48:08.593684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.593689    9989 round_trippers.go:580]     Audit-Id: ad0e5c68-e6f8-4266-8198-de1fd97d7f9b
	I0610 19:48:08.593692    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.593694    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.593696    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.593699    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.593702    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.593773    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.086588    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:09.086618    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.086658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.086666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.089146    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.089159    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.089199    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.089213    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.089220    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.089227    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.089232    9989 round_trippers.go:580]     Audit-Id: f98d64ed-8706-40c8-bca0-af200ff708e8
	I0610 19:48:09.089239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.089496    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0610 19:48:09.089821    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.089828    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.089834    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.089837    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.090901    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.090910    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.090914    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.090918    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.090922    9989 round_trippers.go:580]     Audit-Id: 684d3cb2-4de8-4213-801b-a1b1cdca1ae6
	I0610 19:48:09.090926    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.090929    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.090932    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.091098    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.091288    9989 pod_ready.go:92] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:09.091297    9989 pod_ready.go:81] duration metric: took 4.005597593s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
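
The 4.005597593s duration metric is consistent with the cadence visible above: consecutive GETs land about 500 ms apart, so the wait covered roughly eight poll cycles before the Pod (now at resourceVersion 939) reported Ready=True. For reference, a hand-run equivalent of the condition being polled, assuming a kubeconfig that points at this cluster, would be:

	kubectl -n kube-system get pod coredns-7db6d8ff4d-x984g \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
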
	I0610 19:48:09.091304    9989 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:09.091332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.091336    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.091342    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.091345    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.092345    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.092354    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.092359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.092364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.092368    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.092372    9989 round_trippers.go:580]     Audit-Id: 0ec593cf-ab0e-4393-b1d5-d458992d576c
	I0610 19:48:09.092378    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.092386    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.092510    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.092739    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.092746    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.092751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.092754    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.093693    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.093703    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.093710    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.093716    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.093720    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.093723    9989 round_trippers.go:580]     Audit-Id: cd7754ad-de2e-4337-95c9-5f8181bafe8a
	I0610 19:48:09.093726    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.093736    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.093852    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.591562    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.591592    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.591601    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.591606    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.593937    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.593942    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.593946    9989 round_trippers.go:580]     Audit-Id: a1e77184-60e5-45b7-991d-afda7283198c
	I0610 19:48:09.593949    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.593953    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.593955    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.593958    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.594184    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.594428    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.594435    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.594441    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.594444    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.595688    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.595698    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.595705    9989 round_trippers.go:580]     Audit-Id: b8c9b2c8-7992-42e7-9bf8-112b13ef8d15
	I0610 19:48:09.595711    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.595721    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.595729    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.595732    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.595734    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.595855    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.091896    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.091930    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.091948    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.091961    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.094812    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:10.094827    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.094833    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.094838    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.094842    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.094847    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.094850    9989 round_trippers.go:580]     Audit-Id: 36d914ed-5a76-4cfd-aea2-50d2467afc00
	I0610 19:48:10.094854    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.095220    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.095550    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.095559    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.095567    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.095572    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.097001    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.097008    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.097012    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.097016    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.097018    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.097021    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.097031    9989 round_trippers.go:580]     Audit-Id: 69d6521e-fa5d-4f41-a0e6-1742e53a772b
	I0610 19:48:10.097034    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.097219    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.592589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.592613    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.592625    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.592631    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.595848    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:10.595860    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.595867    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.595872    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.595876    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.595881    9989 round_trippers.go:580]     Audit-Id: 11308bab-1148-4a9a-9a2f-6d24ea1297c6
	I0610 19:48:10.595886    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.595890    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.595995    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.596332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.596342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.596350    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.596372    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.597763    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.597770    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.597776    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.597781    9989 round_trippers.go:580]     Audit-Id: 04f99b83-61e5-4bf2-8781-a0e87f56f205
	I0610 19:48:10.597786    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.597791    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.597794    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.597796    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.597950    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.092146    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.092175    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.092188    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.092244    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.094833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.094848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.094855    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.094859    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.094864    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.094869    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.094873    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.094877    9989 round_trippers.go:580]     Audit-Id: f1b5bd76-11e8-4009-a1d4-09ae141a7be4
	I0610 19:48:11.095063    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.095396    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.095405    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.095414    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.095420    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.096829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.096837    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.096842    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.096845    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.096848    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.096851    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.096855    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.096857    9989 round_trippers.go:580]     Audit-Id: 5edc3937-e4f9-4fc8-924f-f2f08684b9af
	I0610 19:48:11.097460    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.097661    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
	I0610 19:48:11.592045    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.592069    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.592139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.592150    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.594256    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.594268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.594276    9989 round_trippers.go:580]     Audit-Id: 22199be0-8b40-4afe-8222-00876ce24849
	I0610 19:48:11.594280    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.594284    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.594289    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.594292    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.594295    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.594751    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.595057    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.595064    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.595069    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.595073    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.596263    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.596270    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.596275    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.596277    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.596280    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.596282    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.596285    9989 round_trippers.go:580]     Audit-Id: 1e950ce6-6a1d-4fb4-862e-369bdd1c1b97
	I0610 19:48:11.596287    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.596438    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.091946    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.092024    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.092038    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.092047    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.094382    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:12.094392    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.094398    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.094402    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.094410    9989 round_trippers.go:580]     Audit-Id: fae3296c-1bb4-48d8-bb8a-365ebcc14279
	I0610 19:48:12.094421    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.094424    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.094428    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.094726    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.095092    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.095102    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.095110    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.095115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.096329    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.096337    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.096342    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.096346    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.096350    9989 round_trippers.go:580]     Audit-Id: 3613c759-c38d-4132-b7db-3ebfd2715c11
	I0610 19:48:12.096352    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.096355    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.096357    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.096531    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.591302    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.591317    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.591323    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.591326    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.592512    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.592525    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.592532    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.592537    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.592541    9989 round_trippers.go:580]     Audit-Id: b9eb3c47-6f8d-4edb-a70c-efdabd5c9569
	I0610 19:48:12.592545    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.592550    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.592554    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.592679    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.592922    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.592929    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.592935    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.592939    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.594275    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.594281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.594287    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.594291    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.594299    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.594306    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.594315    9989 round_trippers.go:580]     Audit-Id: 3687110f-6d7b-4d3c-a20f-dbbdac34123e
	I0610 19:48:12.594320    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.594536    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.092944    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.092964    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.092975    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.092980    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.094898    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.094907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.094913    9989 round_trippers.go:580]     Audit-Id: 4746a862-34ed-4f9d-86e0-fe54a5c8b1f0
	I0610 19:48:13.094916    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.094920    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.094923    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.094926    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.094929    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.095280    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.095536    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.095548    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.095554    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.095559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.096553    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:13.096561    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.096567    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.096571    9989 round_trippers.go:580]     Audit-Id: 72e59267-4587-49b7-acec-8760fef789ba
	I0610 19:48:13.096574    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.096579    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.096583    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.096586    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.096715    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.591444    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.591547    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.591562    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.591569    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:13.593942    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.593954    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.593964    9989 round_trippers.go:580]     Audit-Id: 4cb26672-7251-47d6-9956-9bd290658ddd
	I0610 19:48:13.593972    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.593977    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.593982    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.593989    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.594310    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.594645    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.594658    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.594666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.594673    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.596261    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.596268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.596273    9989 round_trippers.go:580]     Audit-Id: 67e85776-8134-4d60-b04e-6745575e0722
	I0610 19:48:13.596276    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.596280    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.596282    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.596286    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.596288    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.596582    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.596755    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
	I0610 19:48:14.091643    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.091719    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.091733    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.091741    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.094245    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.094280    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.094290    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.094312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.094319    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.094323    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.094329    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.094332    9989 round_trippers.go:580]     Audit-Id: 950f168e-9ccc-4272-accd-6013766a76ca
	I0610 19:48:14.094657    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.094995    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.095005    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.095012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.095015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.096236    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.096244    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.096250    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.096256    9989 round_trippers.go:580]     Audit-Id: eebd34cd-fcec-4d30-b2c0-a119875e2dbd
	I0610 19:48:14.096260    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.096265    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.096267    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.096270    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.096411    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:14.592108    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.592139    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.592184    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.592191    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.594672    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.594684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.594691    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.594694    9989 round_trippers.go:580]     Audit-Id: 8d594bf6-b784-4c8a-aec0-2be7690404dc
	I0610 19:48:14.594698    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.594701    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.594705    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.594709    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.595294    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.595634    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.595643    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.595658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.595665    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.596893    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.596900    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.596905    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.596917    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.596921    9989 round_trippers.go:580]     Audit-Id: 3dd28d6f-84f3-46df-9566-43f2d793ebd5
	I0610 19:48:14.596923    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.596927    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.596930    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.597086    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.091684    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.091716    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.091756    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.091765    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.094212    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.094225    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.094232    9989 round_trippers.go:580]     Audit-Id: e13d9f6e-c973-4ff8-873c-d7b8c4b8f56d
	I0610 19:48:15.094237    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.094242    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.094248    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.094252    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.094257    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.094341    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:15.094659    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.094668    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.094675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.094680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.096045    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.096057    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.096064    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.096085    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.096094    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.096100    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.096105    9989 round_trippers.go:580]     Audit-Id: 296d945a-df5f-46db-a534-d725c2470a49
	I0610 19:48:15.096109    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.096301    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.592832    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.592857    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.592866    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.592872    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.595717    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.595735    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.595746    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.595754    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.595772    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.595779    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.595786    9989 round_trippers.go:580]     Audit-Id: ae59896b-cf44-4f51-a715-f1122fd8af04
	I0610 19:48:15.595790    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.596233    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"958","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0610 19:48:15.596566    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.596576    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.596583    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.596597    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.597753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.597760    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.597765    9989 round_trippers.go:580]     Audit-Id: b0d6cb8a-03a6-44b3-a2ba-bbdc0b1bb2cd
	I0610 19:48:15.597769    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.597774    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.597778    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.597781    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.597783    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.597942    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.598119    9989 pod_ready.go:92] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.598127    9989 pod_ready.go:81] duration metric: took 6.507043423s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.598142    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.598180    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:15.598184    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.598190    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.598194    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.599330    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.599339    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.599344    9989 round_trippers.go:580]     Audit-Id: 9ee40abb-4038-4697-bf98-1a8c08e3e5e7
	I0610 19:48:15.599355    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.599369    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.599374    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.599378    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.599383    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.599946    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"954","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0610 19:48:15.600736    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.600744    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.600750    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.600755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.602146    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.602154    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.602161    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.602166    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.602170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.602172    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.602175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.602177    9989 round_trippers.go:580]     Audit-Id: c8e6ccc9-5c26-4e00-8c74-5394763932f0
	I0610 19:48:15.602374    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.602545    9989 pod_ready.go:92] pod "kube-apiserver-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.602554    9989 pod_ready.go:81] duration metric: took 4.406297ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.602560    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.602589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:15.602593    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.602599    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.602603    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.603793    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.603799    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.603805    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.603809    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.603813    9989 round_trippers.go:580]     Audit-Id: 06801598-bd08-4f01-b582-51da8e9dc299
	I0610 19:48:15.603815    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.603817    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.603820    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.604059    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"956","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0610 19:48:15.604286    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.604293    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.604298    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.604303    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605338    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.605345    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.605350    9989 round_trippers.go:580]     Audit-Id: ef3b568d-cb90-461e-91e7-4aa6b5568300
	I0610 19:48:15.605353    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.605357    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.605360    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.605364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.605373    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.605538    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.605703    9989 pod_ready.go:92] pod "kube-controller-manager-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.605711    9989 pod_ready.go:81] duration metric: took 3.145898ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605717    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605744    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:15.605749    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.605755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605759    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.606810    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.606817    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.606822    9989 round_trippers.go:580]     Audit-Id: 9e88e041-c1ec-4328-a34c-7b5e2396785a
	I0610 19:48:15.606825    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.606827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.606830    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.606833    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.606836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.607062    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:15.607284    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:15.607291    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.607297    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.607301    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.608273    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.608281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.608288    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.608294    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.608298    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.608301    9989 round_trippers.go:580]     Audit-Id: 9b407b86-eb01-4135-9dfb-f26b1633b27a
	I0610 19:48:15.608303    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.608306    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.608468    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"949","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:15.608621    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:15.608630    9989 pod_ready.go:81] duration metric: took 2.908037ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:15.608636    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
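The three lines above show the extra gate applied to per-node pods: before waiting on a pod's Ready condition, the harness looks up the hosting node and skips the wait when that node is not Ready (here multinode-353000-m03 reports "Ready":"Unknown"). A minimal sketch of the same check, under the same client-go assumptions as the earlier sketch (kubeconfig path hypothetical; names taken from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-f6tzv", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Look up the node the pod is scheduled on, as the paired GETs above do.
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Printf("node %q hosting pod %q is not Ready; skipping the pod wait\n", node.Name, pod.Name)
		return
	}
	// Otherwise, poll the pod's Ready condition as in the earlier sketch.
}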
	I0610 19:48:15.608641    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.608665    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:15.608670    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.608675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.608680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.609749    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.609755    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.609759    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.609763    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.609766    9989 round_trippers.go:580]     Audit-Id: 9d2809bc-8920-4033-a980-81e0b514b51e
	I0610 19:48:15.609768    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.609771    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.609774    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.609923    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:15.610130    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:15.610137    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.610142    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.610147    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.611124    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.611131    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.611136    9989 round_trippers.go:580]     Audit-Id: b7f93f53-711a-4909-8dfa-b5358e3edf06
	I0610 19:48:15.611163    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.611167    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.611170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.611173    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.611175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.611312    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:15.611447    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.611454    9989 pod_ready.go:81] duration metric: took 2.808014ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.611459    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.794030    9989 request.go:629] Waited for 182.512666ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
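
The "Waited for ... due to client-side throttling" message above comes from client-go's token-bucket rate limiter, which delays requests once they outpace the configured QPS and Burst. A sketch of where those knobs live; newThrottledClient is an illustrative name, and 5/10 are client-go's defaults:

    package throttle

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newThrottledClient shows where the client-side limiter is configured.
    // The defaults below are what produce the ~200ms waits logged above.
    func newThrottledClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
    	cfg.QPS = 5    // steady-state requests per second (client-go default)
    	cfg.Burst = 10 // extra headroom for short bursts (client-go default)
    	return kubernetes.NewForConfig(cfg)
    }
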
	I0610 19:48:15.794147    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:15.794157    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.794169    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.794177    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.796912    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.796926    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.796934    9989 round_trippers.go:580]     Audit-Id: 3854ac46-1b79-4426-8236-7591cc550ae2
	I0610 19:48:15.796938    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.796942    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.796946    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.796978    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.796983    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.797082    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:15.994033    9989 request.go:629] Waited for 196.636422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994102    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994108    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.994117    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.994122    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.995838    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.995848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.995853    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.995857    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.995860    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.995863    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.995866    9989 round_trippers.go:580]     Audit-Id: 038e8b7e-5833-4987-8dec-d70fd06fd8f3
	I0610 19:48:15.995869    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.996172    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.996363    9989 pod_ready.go:92] pod "kube-proxy-v7s4q" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.996371    9989 pod_ready.go:81] duration metric: took 384.920541ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.996378    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.194182    9989 request.go:629] Waited for 197.750366ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194292    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194302    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.194312    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.194320    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.196795    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.196809    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.196822    9989 round_trippers.go:580]     Audit-Id: 038d5bdb-1b7f-4b04-89c8-33d598c4b1d6
	I0610 19:48:16.196840    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.196849    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.196855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.196880    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.196889    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.197056    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"943","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0610 19:48:16.393212    9989 request.go:629] Waited for 195.873626ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393266    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393272    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.393278    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.393282    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.395123    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:16.395136    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.395141    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.395145    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.395150    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.395153    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.395155    9989 round_trippers.go:580]     Audit-Id: ab94a6ed-7607-433e-8303-56582026becf
	I0610 19:48:16.395158    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.395272    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:16.395463    9989 pod_ready.go:92] pod "kube-scheduler-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:16.395471    9989 pod_ready.go:81] duration metric: took 399.102366ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.395478    9989 pod_ready.go:38] duration metric: took 11.315661502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:16.395490    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:48:16.395535    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:48:16.407763    9989 command_runner.go:130] > 1536
	I0610 19:48:16.407838    9989 api_server.go:72] duration metric: took 13.032244276s to wait for apiserver process to appear ...
	I0610 19:48:16.407853    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:48:16.407872    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:16.410818    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
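
The healthz check above is a plain HTTPS GET that expects status 200 and the literal body "ok". A rough equivalent under that assumption; unlike this illustration, a real client should verify against the cluster CA instead of skipping TLS verification:

    package health

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz issues the same kind of probe as the log above:
    // GET /healthz, expect HTTP 200 with body "ok".
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only; trust the cluster CA in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return err
    	}
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz not ok: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }
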
	I0610 19:48:16.410851    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:16.410855    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.410861    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.410865    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.411473    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:16.411482    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.411486    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.411489    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:16.411493    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.411496    9989 round_trippers.go:580]     Audit-Id: 9e18606b-4bce-473d-8045-05f615ea3c0b
	I0610 19:48:16.411499    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.411502    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.411504    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.411534    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:16.411563    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:16.411571    9989 api_server.go:131] duration metric: took 3.713676ms to wait for apiserver health ...
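
The /version payload just logged is apimachinery's version.Info; pulling out the control-plane version is a one-struct JSON decode. versionInfo and parseVersion are illustrative names:

    package apiversion

    import "encoding/json"

    // versionInfo mirrors the subset of the /version payload used above
    // (apimachinery's version.Info carries a few more fields).
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    // parseVersion extracts the control-plane version, e.g. "v1.30.1" above.
    func parseVersion(body []byte) (string, error) {
    	var v versionInfo
    	if err := json.Unmarshal(body, &v); err != nil {
    		return "", err
    	}
    	return v.GitVersion, nil
    }
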
	I0610 19:48:16.411576    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:16.593917    9989 request.go:629] Waited for 182.303257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593969    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593982    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.594020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.594030    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.598338    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.598347    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.598352    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.598356    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.598359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.598362    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.598366    9989 round_trippers.go:580]     Audit-Id: 739ff66b-4603-4a26-9ed9-1936484cf2df
	I0610 19:48:16.598369    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.598986    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:16.600809    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:16.600820    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:16.600824    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:16.600827    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:16.600829    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:16.600832    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:16.600835    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:16.600838    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:16.600841    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:16.600843    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:16.600846    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:16.600849    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:16.600851    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:16.600856    9989 system_pods.go:74] duration metric: took 189.281493ms to wait for pod list to return data ...
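
The system_pods wait reduces to listing kube-system and requiring phase Running for each pod, which is the gist of the 12-pod check above. A hypothetical helper with that shape (allSystemPodsRunning is not minikube's actual function):

    package syspods

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning lists kube-system and requires every pod to
    // report phase Running, as in the listing above.
    func allSystemPodsRunning(ctx context.Context, c kubernetes.Interface) error {
    	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return fmt.Errorf("pod %q is %s, want Running", p.Name, p.Status.Phase)
    		}
    	}
    	return nil
    }
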
	I0610 19:48:16.600861    9989 default_sa.go:34] waiting for default service account to be created ...
	I0610 19:48:16.794887    9989 request.go:629] Waited for 193.957918ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794986    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794997    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.795009    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.795017    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.797833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.797849    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.797856    9989 round_trippers.go:580]     Content-Length: 261
	I0610 19:48:16.797860    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.797863    9989 round_trippers.go:580]     Audit-Id: a5fbe232-e1a9-4892-a78a-2013b453a7c8
	I0610 19:48:16.797870    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.797873    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.797878    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.797881    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.797896    9989 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"809c40cb-86f1-483d-98cc-1b46432644d5","resourceVersion":"323","creationTimestamp":"2024-06-11T02:40:31Z"}}]}
	I0610 19:48:16.798039    9989 default_sa.go:45] found service account: "default"
	I0610 19:48:16.798051    9989 default_sa.go:55] duration metric: took 197.191772ms for default service account to be created ...
	I0610 19:48:16.798058    9989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 19:48:16.994131    9989 request.go:629] Waited for 196.005872ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994194    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994203    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.994251    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.994262    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.998793    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.998811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.998819    9989 round_trippers.go:580]     Audit-Id: 3a3c6305-a6bc-4dd6-990c-e7f5db70738f
	I0610 19:48:16.998824    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.998829    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.998845    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.998850    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.998853    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.999210    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:17.001028    9989 system_pods.go:86] 12 kube-system pods found
	I0610 19:48:17.001039    9989 system_pods.go:89] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:17.001043    9989 system_pods.go:89] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:17.001047    9989 system_pods.go:89] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:17.001050    9989 system_pods.go:89] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:17.001054    9989 system_pods.go:89] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:17.001057    9989 system_pods.go:89] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:17.001062    9989 system_pods.go:89] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:17.001065    9989 system_pods.go:89] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:17.001069    9989 system_pods.go:89] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:17.001072    9989 system_pods.go:89] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:17.001076    9989 system_pods.go:89] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:17.001079    9989 system_pods.go:89] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:17.001084    9989 system_pods.go:126] duration metric: took 203.027203ms to wait for k8s-apps to be running ...
	I0610 19:48:17.001090    9989 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 19:48:17.001139    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:48:17.012670    9989 system_svc.go:56] duration metric: took 11.575591ms WaitForService to wait for kubelet
	I0610 19:48:17.012687    9989 kubeadm.go:576] duration metric: took 13.637116157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:48:17.012699    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:17.194231    9989 request.go:629] Waited for 181.491134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194340    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:17.194363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:17.194370    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:17.197119    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:17.197137    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:17.197149    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:17.197156    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:17.197162    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:17.197169    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:17.197176    9989 round_trippers.go:580]     Audit-Id: d3d91bd9-0b1c-4a20-9ebb-04b5962cdbc6
	I0610 19:48:17.197183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:17.197758    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15445 chars]
	I0610 19:48:17.198317    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198329    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198338    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198342    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198348    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198354    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198359    9989 node_conditions.go:105] duration metric: took 185.662539ms to run NodePressure ...
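
The node_conditions figures above (cpu capacity 2 and ephemeral storage 17734596Ki per node) come straight from each node's status.capacity. A sketch of that readout with client-go; printNodeCapacity is an illustrative name:

    package capacity

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reads the same figures as the NodePressure pass
    // above: CPU and ephemeral-storage capacity from each node's status.
    func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    	return nil
    }
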
	I0610 19:48:17.198370    9989 start.go:240] waiting for startup goroutines ...
	I0610 19:48:17.198378    9989 start.go:245] waiting for cluster config update ...
	I0610 19:48:17.198401    9989 start.go:254] writing updated cluster config ...
	I0610 19:48:17.220816    9989 out.go:177] 
	I0610 19:48:17.242724    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:17.242860    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.265195    9989 out.go:177] * Starting "multinode-353000-m02" worker node in "multinode-353000" cluster
	I0610 19:48:17.307293    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:48:17.307327    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:48:17.307547    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:48:17.307565    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:48:17.307689    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.308695    9989 start.go:360] acquireMachinesLock for multinode-353000-m02: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:48:17.308814    9989 start.go:364] duration metric: took 94.629µs to acquireMachinesLock for "multinode-353000-m02"
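
acquireMachinesLock serializes machine operations across processes; the log shows its retry parameters (Delay:500ms Timeout:13m0s). minikube's actual lock implementation differs, but the retry-until-deadline shape can be sketched with an exclusive lock file:

    package machinelock

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire takes an exclusive lock file, retrying every `delay` until
    // `timeout`. Generic sketch only, not minikube's real mechanism.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out waiting for lock %s", path)
    		}
    		time.Sleep(delay)
    	}
    }
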
	I0610 19:48:17.308843    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:48:17.308851    9989 fix.go:54] fixHost starting: m02
	I0610 19:48:17.309302    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:48:17.309340    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:48:17.318771    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53805
	I0610 19:48:17.319159    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:48:17.319519    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:48:17.319536    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:48:17.319731    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:48:17.319893    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.319997    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:48:17.320076    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.320165    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:48:17.321139    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.321165    9989 fix.go:112] recreateIfNeeded on multinode-353000-m02: state=Stopped err=<nil>
	I0610 19:48:17.321176    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	W0610 19:48:17.321267    9989 fix.go:138] unexpected machine state, will restart: <nil>
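
The "hyperkit pid 9545 missing from process table" lines above are the result of a liveness probe on the pid recorded in hyperkit.pid. On Unix the conventional probe is signal 0, which checks for existence without delivering anything; alive is a hypothetical name:

    package pidprobe

    import (
    	"os"
    	"syscall"
    )

    // alive reports whether a process with the given pid still exists.
    // Signal 0 performs the check without actually signaling the process.
    func alive(pid int) bool {
    	proc, err := os.FindProcess(pid) // never fails on Unix
    	if err != nil {
    		return false
    	}
    	return proc.Signal(syscall.Signal(0)) == nil
    }
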
	I0610 19:48:17.342117    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000-m02" ...
	I0610 19:48:17.384293    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .Start
	I0610 19:48:17.384586    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.384618    9989 main.go:141] libmachine: (multinode-353000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid
	I0610 19:48:17.386481    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.386504    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | pid 9545 is in state "Stopped"
	I0610 19:48:17.386538    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid...
	I0610 19:48:17.386916    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Using UUID 3b15a703-00dc-45e7-88e9-620fa037ae16
	I0610 19:48:17.404856    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Generated MAC 9a:45:71:59:94:c9
	I0610 19:48:17.404885    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:48:17.405069    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:48:17.405097    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:48:17.405170    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3b15a703-00dc-45e7-88e9-620fa037ae16", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:48:17.405218    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3b15a703-00dc-45e7-88e9-620fa037ae16 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:48:17.405234    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:48:17.406727    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Pid is 10028
	I0610 19:48:17.407115    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 0
	I0610 19:48:17.407129    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.407257    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 10028
	I0610 19:48:17.409351    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:48:17.409467    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:48:17.409488    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690bdc}
	I0610 19:48:17.409512    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:48:17.409523    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:48:17.409543    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found match: 9a:45:71:59:94:c9
	I0610 19:48:17.409570    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | IP: 192.169.0.20
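
The address lookup above scans macOS's /var/db/dhcpd_leases for the entry whose hw_address matches the VM's generated MAC. A simplified parser under the name=/ip_address=/hw_address= layout visible in the log entries (the real lease grammar has more fields); ipForMAC is an illustrative name:

    package leases

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC returns the ip_address from the lease entry whose hw_address
    // ends in mac. It relies on ip_address preceding hw_address within each
    // entry, as in the log above.
    func ipForMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=")
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			return ip, nil
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no DHCP lease found for %s", mac)
    }
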
	I0610 19:48:17.409579    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:48:17.410301    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:17.410512    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.410985    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:48:17.410995    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.411096    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:17.411190    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:17.411313    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411449    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411555    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:17.411688    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:17.411842    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:17.411849    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:48:17.415070    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:48:17.423513    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:48:17.424462    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.424485    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.424494    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.424500    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.810455    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:48:17.810477    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:48:17.925056    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.925078    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.925090    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.925102    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.925970    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:48:17.925981    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:48:23.237466    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:48:23.237549    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:48:23.237560    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:48:23.261554    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:48:52.481015    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:48:52.481029    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481167    9989 buildroot.go:166] provisioning hostname "multinode-353000-m02"
	I0610 19:48:52.481180    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481288    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.481384    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.481465    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481540    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481624    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.481764    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.481913    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.481922    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000-m02 && echo "multinode-353000-m02" | sudo tee /etc/hostname
	I0610 19:48:52.555898    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m02
	
	I0610 19:48:52.555912    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.556047    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.556155    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556244    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556351    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.556487    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.556669    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.556682    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:48:52.627006    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:48:52.627024    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:48:52.627038    9989 buildroot.go:174] setting up certificates
	I0610 19:48:52.627044    9989 provision.go:84] configureAuth start
	I0610 19:48:52.627052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.627185    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:52.627290    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.627382    9989 provision.go:143] copyHostCerts
	I0610 19:48:52.627410    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627456    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:48:52.627462    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627594    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:48:52.627791    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627821    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:48:52.627825    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627924    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:48:52.628081    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628109    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:48:52.628113    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628206    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:48:52.628383    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m02 san=[127.0.0.1 192.169.0.20 localhost minikube multinode-353000-m02]
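
configureAuth generates a server certificate carrying the SANs listed above (127.0.0.1, 192.169.0.20, localhost, minikube, multinode-353000-m02), signed by the machine CA. A compact crypto/x509 sketch of issuing such a SAN-bearing certificate; newServerCert, the serial, and the validity period are illustrative rather than minikube's exact choices:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate with the SANs from the log
    // above, signed by the given CA; it returns the DER cert and its key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-353000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0), // illustrative validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-353000-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.20")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
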
	I0610 19:48:52.864621    9989 provision.go:177] copyRemoteCerts
	I0610 19:48:52.864670    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:48:52.864684    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.864871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.865093    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.865223    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.865370    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:52.902301    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:48:52.902374    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:48:52.922200    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:48:52.922272    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 19:48:52.942419    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:48:52.942486    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:48:52.961961    9989 provision.go:87] duration metric: took 334.921541ms to configureAuth
	I0610 19:48:52.961973    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:48:52.962132    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:52.962145    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:52.962271    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.962375    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.962471    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962561    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962649    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.962765    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.962891    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.962899    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:48:53.026409    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:48:53.026421    9989 buildroot.go:70] root file system type: tmpfs
	I0610 19:48:53.026513    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:48:53.026532    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.026664    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.026757    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026854    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026936    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.027075    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.027217    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.027260    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.19"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:48:53.101854    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.19
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:48:53.101871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.102004    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.102084    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102159    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102254    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.102385    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.102564    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.102577    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:48:54.746316    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:48:54.746329    9989 machine.go:97] duration metric: took 37.336632265s to provisionDockerMachine
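	
The `diff ... || { ... }` one-liner above is an install-if-changed guard: diff exits non-zero when the two units differ or, as the "can't stat" output shows here, when no unit exists yet, so the replace-enable-restart branch ran and systemd created the multi-user.target symlink. Expanded, the idiom is:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi
	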
	I0610 19:48:54.746338    9989 start.go:293] postStartSetup for "multinode-353000-m02" (driver="hyperkit")
	I0610 19:48:54.746346    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:48:54.746364    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.746553    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:48:54.746573    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.746671    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.746768    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.746849    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.746924    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.784393    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:48:54.787362    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:48:54.787371    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:48:54.787375    9989 command_runner.go:130] > ID=buildroot
	I0610 19:48:54.787379    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:48:54.787385    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:48:54.787467    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:48:54.787474    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:48:54.787570    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:48:54.787737    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:48:54.787743    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:48:54.787933    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:48:54.795249    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:48:54.815317    9989 start.go:296] duration metric: took 68.971403ms for postStartSetup
	I0610 19:48:54.815337    9989 fix.go:56] duration metric: took 37.507788969s for fixHost
	I0610 19:48:54.815352    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.815497    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.815593    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815691    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815780    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.815896    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:54.816039    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:54.816046    9989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:48:54.878000    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718074135.243306878
	
	I0610 19:48:54.878010    9989 fix.go:216] guest clock: 1718074135.243306878
	I0610 19:48:54.878017    9989 fix.go:229] Guest: 2024-06-10 19:48:55.243306878 -0700 PDT Remote: 2024-06-10 19:48:54.815342 -0700 PDT m=+195.166531099 (delta=427.964878ms)
	I0610 19:48:54.878027    9989 fix.go:200] guest clock delta is within tolerance: 427.964878ms
	I0610 19:48:54.878031    9989 start.go:83] releasing machines lock for "multinode-353000-m02", held for 37.570510595s
	I0610 19:48:54.878052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.878188    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:54.899842    9989 out.go:177] * Found network options:
	I0610 19:48:54.920775    9989 out.go:177]   - NO_PROXY=192.169.0.19
	W0610 19:48:54.941666    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.941707    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942405    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942613    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942729    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:48:54.942761    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	W0610 19:48:54.942841    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.942952    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.942957    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:48:54.942979    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.943187    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943226    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.943428    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943489    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943627    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943669    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.943798    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.979160    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:48:54.979221    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:48:54.979276    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:48:55.024346    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:48:55.024519    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:48:55.024548    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
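	
The find/mv pass above sidelines stale CNI configs rather than deleting them: any file matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix (here, 87-podman-bridge.conflist). Unrolled, it is roughly:

	# roughly what the find above does (sketch; the real command also prints each name)
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  case "$f" in
	    *.mk_disabled|*'*'*) ;;                    # already disabled, or the glob matched nothing
	    *) sudo mv "$f" "$f.mk_disabled" ;;
	  esac
	done
	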
	I0610 19:48:55.024558    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.024672    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.039727    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:48:55.039987    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:48:55.049027    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:48:55.058181    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.058230    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:48:55.067256    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.076291    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:48:55.085310    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.094333    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:48:55.103537    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:48:55.112676    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:48:55.121615    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
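	
The sed sequence above patches /etc/containerd/config.toml in place instead of templating a fresh file. Against containerd's stock CRI layout, the fields being set land roughly as follows (sketch, not the file from this run):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	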
	I0610 19:48:55.130814    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:48:55.139162    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:48:55.139338    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:48:55.147700    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.246020    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 19:48:55.266428    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.266504    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:48:55.279486    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:48:55.279959    9989 command_runner.go:130] > [Unit]
	I0610 19:48:55.279969    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:48:55.279974    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:48:55.279987    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:48:55.279992    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:48:55.279996    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:48:55.280000    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:48:55.280003    9989 command_runner.go:130] > [Service]
	I0610 19:48:55.280006    9989 command_runner.go:130] > Type=notify
	I0610 19:48:55.280014    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:48:55.280019    9989 command_runner.go:130] > Environment=NO_PROXY=192.169.0.19
	I0610 19:48:55.280025    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:48:55.280036    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:48:55.280044    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:48:55.280049    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:48:55.280056    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:48:55.280061    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:48:55.280067    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:48:55.280078    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:48:55.280085    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:48:55.280088    9989 command_runner.go:130] > ExecStart=
	I0610 19:48:55.280100    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:48:55.280104    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:48:55.280112    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:48:55.280118    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:48:55.280122    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:48:55.280124    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:48:55.280128    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:48:55.280136    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:48:55.280141    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:48:55.280145    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:48:55.280149    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:48:55.280154    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:48:55.280158    9989 command_runner.go:130] > Delegate=yes
	I0610 19:48:55.280163    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:48:55.280170    9989 command_runner.go:130] > KillMode=process
	I0610 19:48:55.280175    9989 command_runner.go:130] > [Install]
	I0610 19:48:55.280181    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:48:55.280416    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.297490    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:48:55.315143    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.326478    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.337749    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:48:55.355043    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.365212    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.380927    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
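	
Note the endpoint flip: while probing containerd earlier, /etc/crictl.yaml pointed at unix:///run/containerd/containerd.sock; now that docker is the chosen runtime it is rewritten so crictl talks to cri-dockerd instead:

	# /etc/crictl.yaml after this step
	runtime-endpoint: unix:///var/run/cri-dockerd.sock
	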
	I0610 19:48:55.381306    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:48:55.384049    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:48:55.384254    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:48:55.391544    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:48:55.404989    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:48:55.503276    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:48:55.597218    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.597255    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
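	
The 130-byte /etc/docker/daemon.json pushed here carries the "cgroupfs" choice down to dockerd. The exact bytes are not echoed in this log; a daemon.json selecting that driver looks like (illustrative only, using dockerd's documented exec-opts key):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	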
	I0610 19:48:55.612389    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.702999    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:49:56.756006    9989 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0610 19:49:56.756023    9989 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0610 19:49:56.756031    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.055138149s)
	I0610 19:49:56.756087    9989 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 19:49:56.764935    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.764947    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	I0610 19:49:56.764956    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	I0610 19:49:56.764968    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0610 19:49:56.764978    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 19:49:56.764989    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 19:49:56.765000    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765011    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 19:49:56.765020    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765044    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765058    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765082    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765093    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765103    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 19:49:56.765113    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765122    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765131    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765146    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765155    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765181    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765190    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 19:49:56.765199    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 19:49:56.765208    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 19:49:56.765218    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	I0610 19:49:56.765227    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 19:49:56.765235    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 19:49:56.765246    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 19:49:56.765255    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 19:49:56.765264    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 19:49:56.765273    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 19:49:56.765282    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765291    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765300    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 19:49:56.765308    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 19:49:56.765318    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765327    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765336    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765345    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765356    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765365    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765391    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765402    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765411    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765420    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765435    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765443    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765452    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765460    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765469    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765478    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765487    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765497    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765505    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765514    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765523    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765532    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 19:49:56.765540    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765549    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765558    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765568    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765580    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765838    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765857    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 19:49:56.765870    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765878    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 19:49:56.765888    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	I0610 19:49:56.765896    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 19:49:56.765905    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 19:49:56.765913    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 19:49:56.765921    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	I0610 19:49:56.765929    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 19:49:56.765936    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	I0610 19:49:56.765949    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 19:49:56.765956    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	I0610 19:49:56.765966    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 19:49:56.765973    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	I0610 19:49:56.765980    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 19:49:56.765987    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	I0610 19:49:56.765993    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	I0610 19:49:56.765998    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0610 19:49:56.766006    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	I0610 19:49:56.766015    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 19:49:56.766026    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	I0610 19:49:56.766038    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 19:49:56.766047    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 19:49:56.766063    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0610 19:49:56.766074    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0610 19:49:56.766107    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.766115    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	I0610 19:49:56.766124    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0610 19:49:56.766133    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 19:49:56.766140    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0610 19:49:56.766146    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
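	
The journal above pins the failure: the first dockerd (pid 513) came up cleanly, was stopped for the reconfiguration, and its replacement (pid 805) then waited on /run/containerd/containerd.sock until the 60s dial deadline expired, so systemd marked docker.service failed. On the guest, the next diagnostic steps would be the standard systemd/journald ones, e.g.:

	systemctl status containerd docker
	journalctl -xeu containerd --no-pager
	ls -l /run/containerd/containerd.sock
	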
	I0610 19:49:56.790586    9989 out.go:177] 
	W0610 19:49:56.812421    9989 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	[... journalctl output identical to the docker log dump above ...]
	
	-- /stdout --
	W0610 19:49:56.812533    9989 out.go:239] * 
	W0610 19:49:56.813811    9989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 19:49:56.877394    9989 out.go:177] 

                                                
                                                
** /stderr **
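The root cause visible in the journalctl output above is that, after minikube restarted the docker service, dockerd gave up waiting for containerd: failed to dial "/run/containerd/containerd.sock": context deadline exceeded. The error text already names the two diagnosis commands; the sketch below simply runs them on the affected node, reusing the profile and node names from this run and the ssh -n form recorded in the Audit table further down. The containerd and socket checks at the end are added assumptions, not part of the test output.

	# open a shell on the failing node (profile and node names taken from this run)
	$ out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02

	# the two commands suggested by the error message
	$ sudo systemctl status docker.service
	$ sudo journalctl -xeu docker.service --no-pager

	# assumption: inspect containerd and the socket dockerd failed to dial
	$ sudo systemctl status containerd
	$ ls -l /run/containerd/containerd.sock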
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-353000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-353000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-353000 -n multinode-353000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 logs -n 25: (2.696268482s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m03 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp testdata/cp-test.txt                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02:/home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m02 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-353000 node stop m03                                                                                             | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	| node    | multinode-353000 node start                                                                                                | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                                 |                  |         |         |                     |                     |
	| node    | list -p multinode-353000                                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT |                     |
	| stop    | -p multinode-353000                                                                                                        | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT | 10 Jun 24 19:45 PDT |
	| start   | -p multinode-353000                                                                                                        | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT |                     |
	|         | --wait=true -v=8                                                                                                           |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                          |                  |         |         |                     |                     |
	| node    | list -p multinode-353000                                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:49 PDT |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
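	Each cp entry in the Audit table above is paired with an "ssh -n <node> sudo cat" entry: the test copies a file into a node, then reads it back over SSH to verify the transfer. Reconstructed from the table rows for the m03 node, the round trip looks roughly like this (flag placement is a best guess from the Command and Args columns):
	
	# copy a local test file into node m03 of the multinode-353000 profile
	$ out/minikube-darwin-amd64 -p multinode-353000 cp testdata/cp-test.txt multinode-353000-m03:/home/docker/cp-test.txt
	
	# read it back on that node to confirm the contents
	$ out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 sudo cat /home/docker/cp-test.txt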
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 19:45:39
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
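	Read against that format, the first entry below, "I0610 19:45:39.692404    9989 out.go:291]", decodes as an Info-level line (I) logged on June 10 (0610) at 19:45:39.692404 by thread 9989 from out.go line 291.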
	I0610 19:45:39.692404    9989 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:45:39.692578    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692584    9989 out.go:304] Setting ErrFile to fd 2...
	I0610 19:45:39.692587    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692759    9989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:45:39.694238    9989 out.go:298] Setting JSON to false
	I0610 19:45:39.716699    9989 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26095,"bootTime":1718047844,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:45:39.716794    9989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:45:39.738878    9989 out.go:177] * [multinode-353000] minikube v1.33.1 on Darwin 14.4.1
	I0610 19:45:39.781353    9989 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:45:39.781374    9989 notify.go:220] Checking for updates...
	I0610 19:45:39.824429    9989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:45:39.845512    9989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:45:39.866367    9989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:45:39.887316    9989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:45:39.908278    9989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:45:39.929733    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:39.929854    9989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:45:39.930309    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:39.930346    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:39.939199    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53775
	I0610 19:45:39.939566    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:39.939970    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:39.939978    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:39.940198    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:39.940315    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:39.969508    9989 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 19:45:40.011453    9989 start.go:297] selected driver: hyperkit
	I0610 19:45:40.011484    9989 start.go:901] validating driver "hyperkit" against &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.011697    9989 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:45:40.011899    9989 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.012122    9989 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 19:45:40.022075    9989 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 19:45:40.025893    9989 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.025915    9989 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 19:45:40.028541    9989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:45:40.028616    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:45:40.028625    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:45:40.028709    9989 start.go:340] cluster config:
	{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.028811    9989 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.071375    9989 out.go:177] * Starting "multinode-353000" primary control-plane node in "multinode-353000" cluster
	I0610 19:45:40.092477    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:45:40.092569    9989 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 19:45:40.092595    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:45:40.092792    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:45:40.092810    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:45:40.092980    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.093894    9989 start.go:360] acquireMachinesLock for multinode-353000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:45:40.094018    9989 start.go:364] duration metric: took 96.418µs to acquireMachinesLock for "multinode-353000"
	I0610 19:45:40.094053    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:45:40.094073    9989 fix.go:54] fixHost starting: 
	I0610 19:45:40.094498    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.094536    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:40.103465    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53777
	I0610 19:45:40.103833    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:40.104164    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:40.104180    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:40.104403    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:40.104528    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.104641    9989 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:45:40.104724    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.104851    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:45:40.105788    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.105820    9989 fix.go:112] recreateIfNeeded on multinode-353000: state=Stopped err=<nil>
	I0610 19:45:40.105834    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	W0610 19:45:40.105913    9989 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:45:40.148276    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000" ...
	I0610 19:45:40.169332    9989 main.go:141] libmachine: (multinode-353000) Calling .Start
	I0610 19:45:40.169590    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.169632    9989 main.go:141] libmachine: (multinode-353000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid
	I0610 19:45:40.171495    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.171526    9989 main.go:141] libmachine: (multinode-353000) DBG | pid 9523 is in state "Stopped"
	I0610 19:45:40.171559    9989 main.go:141] libmachine: (multinode-353000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid...
	I0610 19:45:40.171882    9989 main.go:141] libmachine: (multinode-353000) DBG | Using UUID f0e955cd-5ea6-4315-ac08-1f17bf5837e0
	I0610 19:45:40.275926    9989 main.go:141] libmachine: (multinode-353000) DBG | Generated MAC 6e:10:a7:68:76:8c
	I0610 19:45:40.275947    9989 main.go:141] libmachine: (multinode-353000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:45:40.276073    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276103    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276164    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f0e955cd-5ea6-4315-ac08-1f17bf5837e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:45:40.276203    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f0e955cd-5ea6-4315-ac08-1f17bf5837e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:45:40.276224    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:45:40.277704    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Pid is 10002
	I0610 19:45:40.278259    9989 main.go:141] libmachine: (multinode-353000) DBG | Attempt 0
	I0610 19:45:40.278270    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.278351    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 10002
	I0610 19:45:40.279973    9989 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:45:40.280067    9989 main.go:141] libmachine: (multinode-353000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:45:40.280108    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:45:40.280134    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:45:40.280161    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:45:40.280185    9989 main.go:141] libmachine: (multinode-353000) DBG | Found match: 6e:10:a7:68:76:8c
	I0610 19:45:40.280206    9989 main.go:141] libmachine: (multinode-353000) DBG | IP: 192.169.0.19
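
The MAC-to-IP resolution above scans /var/db/dhcpd_leases for the VM's generated MAC. An illustrative Go sketch, assuming a simplified key=value lease layout (the real file wraps each entry in braces):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC walks the leases file, remembering the last ip_address line
// and returning it when a hw_address line ends with the wanted MAC.
func findIPForMAC(leasesPath, mac string) (string, bool) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,6e:10:a7:68:76:8c".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPForMAC("/var/db/dhcpd_leases", "6e:10:a7:68:76:8c"); ok {
		fmt.Println("IP:", ip) // the log above resolved this MAC to 192.169.0.19
	}
}
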
	I0610 19:45:40.280241    9989 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:45:40.280942    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:40.281154    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.281614    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:45:40.281625    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.281737    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:40.281835    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:40.281925    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282030    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282140    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:40.282302    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:40.282507    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:40.282515    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:45:40.285439    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:45:40.338413    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:45:40.339064    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.339085    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.339092    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.339099    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.721279    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:45:40.721293    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:45:40.835864    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.835901    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.835915    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.835928    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.836766    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:45:40.836785    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:45:46.073475    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:45:46.073515    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:45:46.073529    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:45:46.097300    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:45:51.340943    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:45:51.340958    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341127    9989 buildroot.go:166] provisioning hostname "multinode-353000"
	I0610 19:45:51.341138    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341240    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.341331    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.341432    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341515    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.341733    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.341882    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.341891    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000 && echo "multinode-353000" | sudo tee /etc/hostname
	I0610 19:45:51.407130    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000
	
	I0610 19:45:51.407155    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.407278    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.407374    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407468    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407561    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.407694    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.407848    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.407859    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:45:51.468420    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:45:51.468442    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:45:51.468459    9989 buildroot.go:174] setting up certificates
	I0610 19:45:51.468467    9989 provision.go:84] configureAuth start
	I0610 19:45:51.468474    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.468599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:51.468700    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.468783    9989 provision.go:143] copyHostCerts
	I0610 19:45:51.468813    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.468881    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:45:51.468890    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.469023    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:45:51.469222    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469262    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:45:51.469268    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469346    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:45:51.469495    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469543    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:45:51.469552    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469665    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:45:51.469841    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-353000]
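
The server certificate above is issued off the local CA with the IP and DNS SANs listed in the log line. A self-contained crypto/x509 sketch; error handling is trimmed, and the validity period and subject are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway self-signed CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the "generating server cert" line.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-353000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.19")},
		DNSNames:     []string{"localhost", "minikube", "multinode-353000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	fmt.Println(len(der), err)
}
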
	I0610 19:45:51.574939    9989 provision.go:177] copyRemoteCerts
	I0610 19:45:51.575027    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:45:51.575057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.575258    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.575433    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.575607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.575800    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:51.610260    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:45:51.610345    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 19:45:51.630147    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:45:51.630204    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:45:51.650528    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:45:51.650589    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:45:51.670054    9989 provision.go:87] duration metric: took 201.581041ms to configureAuth
	I0610 19:45:51.670067    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:45:51.670242    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:51.670255    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:51.670386    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.670503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.670607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670720    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670803    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.670922    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.671045    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.671053    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:45:51.726480    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:45:51.726495    9989 buildroot.go:70] root file system type: tmpfs
	I0610 19:45:51.726575    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:45:51.726593    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.726736    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.726853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.726941    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.727024    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.727157    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.727300    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.727345    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:45:51.793222    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:45:51.793246    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.793378    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.793475    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793564    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793652    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.793772    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.793927    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.793939    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:45:53.421030    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:45:53.421054    9989 machine.go:97] duration metric: took 13.139887748s to provisionDockerMachine
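
The diff-or-replace one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit differs from what is on disk, which keeps re-provisioning idempotent. A rough Go equivalent; the systemctl invocations are assumed available on the target:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit replaces the unit file and bounces the service only when the
// rendered content differs, mirroring the "diff || mv && restart" pattern.
func updateUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty
	if bytes.Equal(current, rendered) {
		return nil // unchanged: skip the needless docker restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
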
	I0610 19:45:53.421087    9989 start.go:293] postStartSetup for "multinode-353000" (driver="hyperkit")
	I0610 19:45:53.421100    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:45:53.421124    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.421309    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:45:53.421321    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.421404    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.421503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.421591    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.421689    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.456942    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:45:53.459812    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:45:53.459822    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:45:53.459827    9989 command_runner.go:130] > ID=buildroot
	I0610 19:45:53.459833    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:45:53.459840    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:45:53.459988    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:45:53.459999    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:45:53.460114    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:45:53.460308    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:45:53.460314    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:45:53.460524    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:45:53.467718    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:45:53.486520    9989 start.go:296] duration metric: took 65.424192ms for postStartSetup
	I0610 19:45:53.486540    9989 fix.go:56] duration metric: took 13.392941824s for fixHost
	I0610 19:45:53.486552    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.486683    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.486777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486935    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.487060    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:53.487195    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:53.487202    9989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:45:53.540939    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073953.908242527
	
	I0610 19:45:53.540950    9989 fix.go:216] guest clock: 1718073953.908242527
	I0610 19:45:53.540963    9989 fix.go:229] Guest: 2024-06-10 19:45:53.908242527 -0700 PDT Remote: 2024-06-10 19:45:53.486543 -0700 PDT m=+13.831437270 (delta=421.699527ms)
	I0610 19:45:53.540982    9989 fix.go:200] guest clock delta is within tolerance: 421.699527ms
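
The guest clock check parses the VM's `date +%s.%N` output and accepts the skew when the delta stays within tolerance (421ms here). A minimal sketch; the one-second tolerance is an assumption, since the log only shows the delta passing:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDeltaWithinTolerance compares the guest's epoch timestamp against the
// host clock and reports whether the absolute delta is acceptable.
func clockDeltaWithinTolerance(guestOut string, tolerance time.Duration) (bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance, nil
}

func main() {
	guest := fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9)
	ok, err := clockDeltaWithinTolerance(guest, time.Second)
	fmt.Println(ok, err) // true <nil>
}
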
	I0610 19:45:53.540986    9989 start.go:83] releasing machines lock for "multinode-353000", held for 13.447423727s
	I0610 19:45:53.541004    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541129    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:53.541236    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541536    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541646    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541706    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:45:53.541734    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541762    9989 ssh_runner.go:195] Run: cat /version.json
	I0610 19:45:53.541777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541836    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541857    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541939    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.541956    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.542057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542069    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542145    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.542159    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.621904    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:45:53.622832    9989 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 19:45:53.623012    9989 ssh_runner.go:195] Run: systemctl --version
	I0610 19:45:53.628064    9989 command_runner.go:130] > systemd 252 (252)
	I0610 19:45:53.628086    9989 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 19:45:53.628210    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:45:53.632390    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:45:53.632443    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:45:53.632487    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:45:53.644499    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:45:53.644515    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:45:53.644525    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.644620    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:53.659247    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:45:53.659535    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:45:53.668457    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:45:53.677198    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:45:53.677239    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:45:53.685876    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.694608    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:45:53.703186    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.711800    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:45:53.720598    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:45:53.729427    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:45:53.738123    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:45:53.747019    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:45:53.754733    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:45:53.754901    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:45:53.762666    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:53.871758    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
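
The sed pipeline above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime class, CNI conf dir) before containerd is restarted. A Go regexp sketch of just the SystemdCgroup flip, for illustration only:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// systemdCgroupRe matches the SystemdCgroup line while preserving its
// indentation, like the sed expression logged above.
var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func setCgroupfs(configPath string) error {
	data, err := os.ReadFile(configPath)
	if err != nil {
		return err
	}
	out := systemdCgroupRe.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(configPath, out, 0o644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
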
	I0610 19:45:53.891305    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.891381    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:45:53.902978    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:45:53.903571    9989 command_runner.go:130] > [Unit]
	I0610 19:45:53.903596    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:45:53.903615    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:45:53.903621    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:45:53.903625    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:45:53.903632    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:45:53.903636    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:45:53.903639    9989 command_runner.go:130] > [Service]
	I0610 19:45:53.903642    9989 command_runner.go:130] > Type=notify
	I0610 19:45:53.903647    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:45:53.903653    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:45:53.903663    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:45:53.903670    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:45:53.903675    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:45:53.903681    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:45:53.903687    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:45:53.903693    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:45:53.903705    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:45:53.903711    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:45:53.903716    9989 command_runner.go:130] > ExecStart=
	I0610 19:45:53.903727    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:45:53.903732    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:45:53.903739    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:45:53.903744    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:45:53.903748    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:45:53.903751    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:45:53.903755    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:45:53.903763    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:45:53.903768    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:45:53.903771    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:45:53.903775    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:45:53.903780    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:45:53.903783    9989 command_runner.go:130] > Delegate=yes
	I0610 19:45:53.903788    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:45:53.903792    9989 command_runner.go:130] > KillMode=process
	I0610 19:45:53.903795    9989 command_runner.go:130] > [Install]
	I0610 19:45:53.903804    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:45:53.903867    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.918134    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:45:53.937012    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.947454    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.957667    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:45:53.978657    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.989706    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:54.004573    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:45:54.004840    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:45:54.007767    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:45:54.007939    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:45:54.015068    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:45:54.028412    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:45:54.125186    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:45:54.244241    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:45:54.244317    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 19:45:54.259051    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:54.351224    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:45:56.651603    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30043865s)
	I0610 19:45:56.651667    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 19:45:56.662260    9989 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 19:47:54.346370    9989 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m57.688173109s)
	I0610 19:47:54.346439    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.357366    9989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 19:47:54.453493    9989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 19:47:54.558404    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.660727    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 19:47:54.674518    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.685725    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.789246    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 19:47:54.849081    9989 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 19:47:54.849165    9989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 19:47:54.853149    9989 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 19:47:54.853161    9989 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 19:47:54.853166    9989 command_runner.go:130] > Device: 0,22	Inode: 754         Links: 1
	I0610 19:47:54.853172    9989 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 19:47:54.853177    9989 command_runner.go:130] > Access: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853185    9989 command_runner.go:130] > Modify: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853193    9989 command_runner.go:130] > Change: 2024-06-11 02:47:55.210828405 +0000
	I0610 19:47:54.853197    9989 command_runner.go:130] >  Birth: -
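
start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing crictl, and the stat output above confirms the socket exists. A sketch of such a poll loop; the 500ms interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// deadline passes, like the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
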
	I0610 19:47:54.853348    9989 start.go:562] Will wait 60s for crictl version
	I0610 19:47:54.853398    9989 ssh_runner.go:195] Run: which crictl
	I0610 19:47:54.856865    9989 command_runner.go:130] > /usr/bin/crictl
	I0610 19:47:54.856953    9989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 19:47:54.886614    9989 command_runner.go:130] > Version:  0.1.0
	I0610 19:47:54.886666    9989 command_runner.go:130] > RuntimeName:  docker
	I0610 19:47:54.886674    9989 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 19:47:54.886680    9989 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 19:47:54.887717    9989 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 19:47:54.887786    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.903316    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.904109    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.921823    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.965802    9989 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 19:47:54.965890    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:47:54.966288    9989 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0610 19:47:54.971034    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:47:54.981371    9989 kubeadm.go:877] updating cluster {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 19:47:54.981452    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:47:54.981509    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:54.993718    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:54.993732    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:54.993737    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:54.993741    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:54.993744    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:54.993748    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:54.993753    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:54.993756    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:54.993761    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:54.993765    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:54.994255    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:54.994266    9989 docker.go:615] Images already preloaded, skipping extraction
	I0610 19:47:54.994336    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:55.006339    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:55.006352    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:55.006356    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:55.006360    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:55.006363    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:55.006379    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:55.006385    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:55.006390    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:55.006394    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:55.006398    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:55.006906    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:55.006921    9989 cache_images.go:84] Images are preloaded, skipping loading
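
The preload check passes because every expected image already appears in the `docker images` output, so extraction is skipped. A sketch of that comparison; the required list here is abbreviated from the log output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every required image is already present
// according to "docker images --format {{.Repository}}:{{.Tag}}".
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	}
	ok, err := imagesPreloaded(required)
	fmt.Println(ok, err)
}
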
	I0610 19:47:55.006932    9989 kubeadm.go:928] updating node { 192.169.0.19 8443 v1.30.1 docker true true} ...
	I0610 19:47:55.007008    9989 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 19:47:55.007079    9989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 19:47:55.025485    9989 command_runner.go:130] > cgroupfs
	I0610 19:47:55.026122    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:47:55.026131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
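
With three nodes found and no CNI configured, kindnet is recommended. A hypothetical reduction of that decision; the single-node fallback here is an assumption, not minikube's actual logic:

package main

import "fmt"

// chooseCNI keys the recommendation off node count when nothing is configured:
// a multinode cluster needs a pod network that routes across hosts.
func chooseCNI(configured string, nodeCount int) string {
	if configured != "" {
		return configured
	}
	if nodeCount > 1 {
		return "kindnet"
	}
	return "" // single node: leave it to the runtime's default
}

func main() {
	fmt.Println(chooseCNI("", 3)) // kindnet, matching "3 nodes found" above
}
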
	I0610 19:47:55.026139    9989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 19:47:55.026158    9989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353000 NodeName:multinode-353000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 19:47:55.026249    9989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-353000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
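	The kubeadm config printed above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---", later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch for enumerating the documents in such a file (it assumes the gopkg.in/yaml.v3 dependency; any YAML decoder with multi-document support works, and the file name is illustrative):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative local copy of the config above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF after the last document
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }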
	
	I0610 19:47:55.026311    9989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 19:47:55.034754    9989 command_runner.go:130] > kubeadm
	I0610 19:47:55.034764    9989 command_runner.go:130] > kubectl
	I0610 19:47:55.034767    9989 command_runner.go:130] > kubelet
	I0610 19:47:55.034842    9989 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 19:47:55.034886    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 19:47:55.042800    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0610 19:47:55.056385    9989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 19:47:55.069690    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0610 19:47:55.083214    9989 ssh_runner.go:195] Run: grep 192.169.0.19	control-plane.minikube.internal$ /etc/hosts
	I0610 19:47:55.086096    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
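The one-liner above keeps /etc/hosts idempotent: grep -v strips any existing line for control-plane.minikube.internal, echo appends the current IP, and the result is copied back over /etc/hosts from a temp file under sudo. The same filtering can be expressed directly in Go; this is a sketch (function name and in-place write are illustrative, since minikube runs the shell version over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line already mapping host, then appends
    // "ip\thost", mirroring the grep/echo pipeline logged above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry for the same host
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.169.0.19", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }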
	I0610 19:47:55.096237    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:55.195683    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:47:55.209046    9989 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000 for IP: 192.169.0.19
	I0610 19:47:55.209070    9989 certs.go:194] generating shared ca certs ...
	I0610 19:47:55.209087    9989 certs.go:226] acquiring lock for ca certs: {Name:mkb8782270d93d160af8329e99f7f211e7b6b737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:47:55.209270    9989 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key
	I0610 19:47:55.209345    9989 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key
	I0610 19:47:55.209355    9989 certs.go:256] generating profile certs ...
	I0610 19:47:55.209458    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key
	I0610 19:47:55.209537    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6
	I0610 19:47:55.209630    9989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key
	I0610 19:47:55.209637    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 19:47:55.209659    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 19:47:55.209677    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 19:47:55.209695    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 19:47:55.209716    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 19:47:55.209746    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 19:47:55.209778    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 19:47:55.209796    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 19:47:55.209888    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem (1338 bytes)
	W0610 19:47:55.209936    9989 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485_empty.pem, impossibly tiny 0 bytes
	I0610 19:47:55.209945    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 19:47:55.209987    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem (1082 bytes)
	I0610 19:47:55.210029    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem (1123 bytes)
	I0610 19:47:55.210067    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem (1679 bytes)
	I0610 19:47:55.210150    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:47:55.210197    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem -> /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.210218    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.210236    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.210677    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 19:47:55.243710    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 19:47:55.274291    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 19:47:55.304150    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 19:47:55.327241    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 19:47:55.347168    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 19:47:55.366973    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 19:47:55.386745    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 19:47:55.406837    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem --> /usr/share/ca-certificates/6485.pem (1338 bytes)
	I0610 19:47:55.426587    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /usr/share/ca-certificates/64852.pem (1708 bytes)
	I0610 19:47:55.446314    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 19:47:55.466320    9989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 19:47:55.480094    9989 ssh_runner.go:195] Run: openssl version
	I0610 19:47:55.484173    9989 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 19:47:55.484381    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6485.pem && ln -fs /usr/share/ca-certificates/6485.pem /etc/ssl/certs/6485.pem"
	I0610 19:47:55.492857    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496253    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496359    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496397    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.500429    9989 command_runner.go:130] > 51391683
	I0610 19:47:55.500562    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6485.pem /etc/ssl/certs/51391683.0"
	I0610 19:47:55.508913    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64852.pem && ln -fs /usr/share/ca-certificates/64852.pem /etc/ssl/certs/64852.pem"
	I0610 19:47:55.517404    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520837    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520969    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.521015    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.525079    9989 command_runner.go:130] > 3ec20f2e
	I0610 19:47:55.525226    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 19:47:55.533665    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 19:47:55.542055    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545479    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545578    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545613    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.549597    9989 command_runner.go:130] > b5213941
	I0610 19:47:55.549850    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
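The openssl x509 -hash calls above print the OpenSSL subject-name hash of each certificate (51391683, 3ec20f2e, b5213941), and the symlinks named <hash>.0 under /etc/ssl/certs are how OpenSSL's default verifier locates a CA by subject. A Go sketch of the same pattern (function name illustrative; the real commands run inside the VM over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash computes the OpenSSL subject hash of certPath and symlinks
    // /etc/ssl/certs/<hash>.0 to it, as the commands in the log do.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ignore error: the link may not exist yet
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }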
	I0610 19:47:55.558357    9989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561717    9989 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561732    9989 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 19:47:55.561740    9989 command_runner.go:130] > Device: 253,1	Inode: 8384328     Links: 1
	I0610 19:47:55.561749    9989 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:47:55.561758    9989 command_runner.go:130] > Access: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561763    9989 command_runner.go:130] > Modify: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561770    9989 command_runner.go:130] > Change: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561776    9989 command_runner.go:130] >  Birth: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561913    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 19:47:55.566014    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.566161    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 19:47:55.570209    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.570381    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 19:47:55.574601    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.574837    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 19:47:55.578866    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.579032    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 19:47:55.583114    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.583281    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 19:47:55.587426    9989 command_runner.go:130] > Certificate will not expire
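Each openssl x509 -checkend 86400 invocation above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 produces the "Certificate will not expire" lines. The equivalent check in Go with the standard crypto/x509 package, as a self-contained sketch (path from the log; a single PEM block per file is assumed):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }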
	I0610 19:47:55.587558    9989 kubeadm.go:391] StartCluster: {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:47:55.587674    9989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.599645    9989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 19:47:55.607448    9989 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 19:47:55.607459    9989 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 19:47:55.607466    9989 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 19:47:55.607470    9989 command_runner.go:130] > member
	W0610 19:47:55.607549    9989 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 19:47:55.607559    9989 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 19:47:55.607568    9989 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 19:47:55.607620    9989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 19:47:55.615074    9989 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:47:55.615382    9989 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-353000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.615468    9989 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-5942/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-353000" cluster setting kubeconfig missing "multinode-353000" context setting]
	I0610 19:47:55.615649    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:47:55.616397    9989 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.616577    9989 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x89f9600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:47:55.616926    9989 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 19:47:55.617061    9989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 19:47:55.624482    9989 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.19
	I0610 19:47:55.624500    9989 kubeadm.go:1154] stopping kube-system containers ...
	I0610 19:47:55.624549    9989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.638294    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.638306    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.638309    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.638314    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.638319    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.638322    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.638326    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.638329    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.638332    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.638345    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.638349    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.638352    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.638355    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.638358    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.638362    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.638365    9989 command_runner.go:130] > 2627ea28857a
	I0610 19:47:55.638951    9989 docker.go:483] Stopping containers: [deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a]
	I0610 19:47:55.639021    9989 ssh_runner.go:195] Run: docker stop deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a
	I0610 19:47:55.653484    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.653495    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.653500    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.653503    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.653506    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.653624    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.653629    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.653632    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.653791    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.653797    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.653800    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.653803    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.653806    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.653844    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.653850    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.653853    9989 command_runner.go:130] > 2627ea28857a
	I0610 19:47:55.654638    9989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 19:47:55.667514    9989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 19:47:55.674892    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 19:47:55.674904    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 19:47:55.674910    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 19:47:55.674930    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674992    9989 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674999    9989 kubeadm.go:156] found existing configuration files:
	
	I0610 19:47:55.675040    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 19:47:55.682287    9989 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682303    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682341    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 19:47:55.689835    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 19:47:55.696884    9989 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696902    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696953    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 19:47:55.704404    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.711485    9989 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711508    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711548    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.718937    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 19:47:55.726127    9989 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726146    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726181    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 19:47:55.733619    9989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 19:47:55.741255    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.804058    9989 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 19:47:55.804120    9989 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 19:47:55.804305    9989 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 19:47:55.804483    9989 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 19:47:55.804689    9989 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 19:47:55.804862    9989 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 19:47:55.805120    9989 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 19:47:55.805265    9989 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 19:47:55.805411    9989 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 19:47:55.805605    9989 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 19:47:55.805743    9989 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 19:47:55.806676    9989 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 19:47:55.806774    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.845988    9989 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 19:47:55.886933    9989 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 19:47:56.013943    9989 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 19:47:56.065755    9989 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 19:47:56.199902    9989 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 19:47:56.356026    9989 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 19:47:56.358145    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.407409    9989 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:47:56.408002    9989 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:47:56.408066    9989 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 19:47:56.513337    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.563955    9989 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 19:47:56.563969    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 19:47:56.570350    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 19:47:56.570701    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 19:47:56.571965    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.651317    9989 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 19:47:56.653781    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:47:56.653842    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.154036    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.654114    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.666427    9989 command_runner.go:130] > 1536
	I0610 19:47:57.666488    9989 api_server.go:72] duration metric: took 1.012757588s to wait for apiserver process to appear ...
	I0610 19:47:57.666498    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:47:57.666515    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.438002    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.438019    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.438029    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.455738    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.455759    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.667766    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.672313    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:47:59.672324    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.166779    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.171966    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:48:00.171979    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.666724    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.671558    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
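	The healthz sequence above is the normal startup progression: 403 while the probe is anonymous and the rbac/bootstrap-roles poststarthook has not yet granted system:anonymous access to /healthz, 500 while the two listed poststarthooks are still failing, then 200 once bootstrap completes. A minimal polling loop in Go that mirrors this behavior (a sketch, not minikube's api_server.go; the skip-verify transport stands in for the unauthenticated probe against a cert the host does not trust):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz polls /healthz until it returns 200 or the deadline passes.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                // 403 and 500 bodies like the ones logged above land here.
                fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := pollHealthz("https://192.169.0.19:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }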
	I0610 19:48:00.671622    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:00.671627    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:00.671635    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:00.671638    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:00.683001    9989 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 19:48:00.683015    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:00.683020    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:00.683023    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:00.683026    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:00.683029    9989 round_trippers.go:580]     Audit-Id: 09da700d-8425-4926-9374-2d6528bd7bb9
	I0610 19:48:00.683033    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:00.683035    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:00.683038    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:00.683058    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:00.683109    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:00.683119    9989 api_server.go:131] duration metric: took 3.016721791s to wait for apiserver health ...
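	The GET /version payload above deserialises into a small struct whose JSON tags match the field names in the body; the version check at api_server.go:141 only needs gitVersion. A trivial sketch (struct name illustrative, body copied from the log):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        const body = `{"major":"1","minor":"30","gitVersion":"v1.30.1","platform":"linux/amd64"}`
        var v versionInfo
        if err := json.Unmarshal([]byte(body), &v); err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }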
	I0610 19:48:00.683126    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:48:00.683131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:48:00.722329    9989 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 19:48:00.744311    9989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 19:48:00.748261    9989 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 19:48:00.748273    9989 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 19:48:00.748278    9989 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 19:48:00.748283    9989 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:48:00.748290    9989 command_runner.go:130] > Access: 2024-06-11 02:45:50.361198634 +0000
	I0610 19:48:00.748295    9989 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 19:48:00.748300    9989 command_runner.go:130] > Change: 2024-06-11 02:45:47.690352312 +0000
	I0610 19:48:00.748303    9989 command_runner.go:130] >  Birth: -
	I0610 19:48:00.748470    9989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 19:48:00.748478    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 19:48:00.778024    9989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 19:48:01.117060    9989 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.147629    9989 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.301672    9989 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 19:48:01.356197    9989 command_runner.go:130] > daemonset.apps/kindnet configured
	I0610 19:48:01.357762    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:01.357819    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.357825    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.357831    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.357834    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.361084    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:01.361095    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.361101    9989 round_trippers.go:580]     Audit-Id: 0a68b78a-1971-4606-9c89-6dd28309d599
	I0610 19:48:01.361107    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.361112    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.361115    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.361118    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.361121    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.362367    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.365313    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:01.365340    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 19:48:01.365347    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 19:48:01.365352    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:01.365356    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 19:48:01.365362    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:01.365367    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 19:48:01.365371    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 19:48:01.365374    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:01.365377    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:01.365381    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0610 19:48:01.365385    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 19:48:01.365390    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0610 19:48:01.365395    9989 system_pods.go:74] duration metric: took 7.626153ms to wait for pod list to return data ...
	I0610 19:48:01.365403    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:01.365440    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:01.365444    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.365450    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.365454    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.367622    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.367635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.367640    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.367653    9989 round_trippers.go:580]     Audit-Id: 9ef6ecc8-1407-4850-b836-c92476875d2b
	I0610 19:48:01.367661    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.367666    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.367671    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.367674    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.367975    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15572 chars]
	I0610 19:48:01.368527    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368541    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368549    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368552    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368556    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368559    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368563    9989 node_conditions.go:105] duration metric: took 3.15591ms to run NodePressure ...
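	The waits above (system_pods.go listing 12 kube-system pods, node_conditions.go reading capacity from /api/v1/nodes) are plain List calls against the apiserver. With client-go the pod listing looks like the sketch below (kubeconfig path taken from the log; a current client-go version is assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path from the log; any reachable cluster works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19046-5942/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }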
	I0610 19:48:01.368573    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:48:01.551683    9989 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 19:48:01.669147    9989 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 19:48:01.670157    9989 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 19:48:01.670212    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0610 19:48:01.670218    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.670224    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.670227    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.674624    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:01.674636    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.674641    9989 round_trippers.go:580]     Audit-Id: c47f63c6-e6e7-4d8d-b049-a6e6efe1f028
	I0610 19:48:01.674644    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.674650    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.674654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.674656    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.674659    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.675233    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0610 19:48:01.675943    9989 kubeadm.go:733] kubelet initialised
	I0610 19:48:01.675953    9989 kubeadm.go:734] duration metric: took 5.786634ms waiting for restarted kubelet to initialise ...
	I0610 19:48:01.675959    9989 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:01.676001    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.676006    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.676012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.676015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.678521    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.678536    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.678546    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.678551    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.678555    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.678558    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.678562    9989 round_trippers.go:580]     Audit-Id: 695aab2d-7185-4ab8-93db-4232865056b6
	I0610 19:48:01.678564    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.679581    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"916"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.681433    9989 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.681482    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:01.681487    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.681493    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.681497    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.683281    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.683286    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.683290    9989 round_trippers.go:580]     Audit-Id: ebbbfe81-a38f-4a3c-8e5c-90703473f744
	I0610 19:48:01.683293    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.683296    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.683308    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.683313    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.683316    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.683580    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:01.683874    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.683881    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.683887    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.683891    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686546    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.686555    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.686561    9989 round_trippers.go:580]     Audit-Id: 2892fe1d-d0a8-4261-8bf0-3133e5e2a446
	I0610 19:48:01.686565    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.686568    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.686571    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.686575    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.686578    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.686656    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.686844    9989 pod_ready.go:97] node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.686854    9989 pod_ready.go:81] duration metric: took 5.411979ms for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.686861    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
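Each wait cycle above follows the same shape: GET the pod, GET the node hosting it, and skip the pod's Ready wait when the node itself reports Ready=False (the pod_ready.go:97 "skipping!" message). A minimal client-go sketch of that node gate, assuming a clientset built elsewhere (the helper name nodeReady is illustrative, not minikube's):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node currently has condition
// Ready=True; the wait loop above skips a pod whenever this is false.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}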
	I0610 19:48:01.686867    9989 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.686904    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:01.686909    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.686915    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686918    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.688977    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.688986    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.688991    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.688996    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.689002    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.689007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.689011    9989 round_trippers.go:580]     Audit-Id: 3ace8889-aedb-4a19-9411-27b71b8a2e0b
	I0610 19:48:01.689015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.689291    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:01.689535    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.689542    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.689547    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.689550    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.690829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.690836    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.690841    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.690845    9989 round_trippers.go:580]     Audit-Id: 2f32a662-31a6-4053-8a84-be837537cd4c
	I0610 19:48:01.690848    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.690851    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.690855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.690858    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.691071    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.691242    9989 pod_ready.go:97] node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691252    9989 pod_ready.go:81] duration metric: took 4.380161ms for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.691258    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691269    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.691301    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:01.691306    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.691311    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.691315    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.692447    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.692457    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.692462    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.692466    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.692469    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.692471    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.692474    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.692476    9989 round_trippers.go:580]     Audit-Id: bad7c45b-bf08-4758-a569-97c3dc9eafb6
	I0610 19:48:01.692666    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"893","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0610 19:48:01.692920    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.692926    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.692932    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.692936    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.694073    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.694081    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.694086    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.694089    9989 round_trippers.go:580]     Audit-Id: 98fa13c5-25d7-4e14-b2a2-7560361baffd
	I0610 19:48:01.694092    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.694095    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.694098    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.694100    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.694341    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.694500    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.694509    9989 pod_ready.go:81] duration metric: took 3.23437ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.694514    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.694519    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.694545    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:01.694549    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.694555    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.694559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.695753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.695761    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.695766    9989 round_trippers.go:580]     Audit-Id: a7d05f7f-1539-4d5f-9fe3-3695667a8deb
	I0610 19:48:01.695770    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.695772    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.695775    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.695777    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.695780    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.695988    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"895","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0610 19:48:01.757966    9989 request.go:629] Waited for 61.697059ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758041    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758048    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.758053    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.758057    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.759756    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.759766    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.759773    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.759779    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.759783    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.759788    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.759793    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.759806    9989 round_trippers.go:580]     Audit-Id: e8ae6de5-f7c9-4f36-881c-ed09a8012b60
	I0610 19:48:01.759959    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.760178    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.760188    9989 pod_ready.go:81] duration metric: took 65.665915ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.760194    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
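The "Waited for … due to client-side throttling, not priority and fairness" lines (request.go:629) come from client-go's token-bucket rate limiter, which defaults to QPS 5 / Burst 10 when the rest.Config leaves them zero; after the first burst of quick GETs, each pod/node pair starts costing roughly 200ms of wait. A sketch of raising those limits (the values 50/100 are illustrative, and nothing in the log says minikube does this):

package throttling

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset whose rate limiter tolerates bursts
// larger than the two GETs issued per pod in the loop above.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go defaults this to 5 when left zero
	cfg.Burst = 100 // and this to 10
	return kubernetes.NewForConfig(cfg)
}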
	I0610 19:48:01.760200    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.959909    9989 request.go:629] Waited for 199.659235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960065    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960075    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.960086    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.960093    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.962763    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.962778    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.962785    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.962789    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.962793    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.962819    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.962827    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.962832    9989 round_trippers.go:580]     Audit-Id: e27af578-4ca0-4cfe-8af3-b60f6b0fa9bd
	I0610 19:48:01.962941    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:02.158260    9989 request.go:629] Waited for 194.998097ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158342    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.158363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.158369    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.160892    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.160907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.160913    9989 round_trippers.go:580]     Audit-Id: 0bef1bb4-379d-409d-8e02-4dbc9a2811a4
	I0610 19:48:02.160918    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.160949    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.160957    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.160961    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.160968    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.161074    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"750","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:02.161324    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:02.161336    9989 pod_ready.go:81] duration metric: took 401.144458ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.161344    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:02.161351    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.358390    9989 request.go:629] Waited for 196.956176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358484    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358496    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.358508    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.358515    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.360992    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.361021    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.361031    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.361036    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.361039    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.361043    9989 round_trippers.go:580]     Audit-Id: 6f8be12b-1957-417b-8d1b-e678c7792dd3
	I0610 19:48:02.361046    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.361051    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.361202    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:02.557934    9989 request.go:629] Waited for 196.31847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.557999    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.558009    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.558037    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.558044    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.560427    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.560441    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.560448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.560454    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.560458    9989 round_trippers.go:580]     Audit-Id: 4c41615e-621c-4a97-9365-ac7c1773c395
	I0610 19:48:02.560461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.560465    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.560468    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.560523    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:02.560758    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:02.560768    9989 pod_ready.go:81] duration metric: took 399.425236ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.560777    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.757957    9989 request.go:629] Waited for 197.131938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758066    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758078    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.758089    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.758095    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.761202    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:02.761216    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.761223    9989 round_trippers.go:580]     Audit-Id: b73d177c-0cc8-4b3e-9eaa-58e1aca589bd
	I0610 19:48:02.761229    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.761233    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.761236    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.761240    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.761243    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.761619    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:02.958192    9989 request.go:629] Waited for 196.273854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958328    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.958357    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.958367    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.961275    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.961290    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.961297    9989 round_trippers.go:580]     Audit-Id: 55ebfcfe-9c2e-43ee-8757-62fb6711bcdf
	I0610 19:48:02.961302    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.961312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.961315    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.961320    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.961324    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.961498    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:02.961759    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961777    9989 pod_ready.go:81] duration metric: took 401.008697ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.961786    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961792    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:03.158219    9989 request.go:629] Waited for 196.363249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158365    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158377    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.158388    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.158394    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.160987    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.161000    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.161007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.161011    9989 round_trippers.go:580]     Audit-Id: 4b2e7508-8f47-4d7f-b4ea-f0310bd3d491
	I0610 19:48:03.161015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.161019    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.161023    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.161027    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.161126    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"897","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0610 19:48:03.359868    9989 request.go:629] Waited for 198.409302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.359998    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.360008    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.360020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.360027    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.362871    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.362892    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.362899    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.362904    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.362908    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.362916    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.362921    9989 round_trippers.go:580]     Audit-Id: ba3a2e04-447a-4800-872e-bbbc8698c7f3
	I0610 19:48:03.362931    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.363233    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:03.363483    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363503    9989 pod_ready.go:81] duration metric: took 401.718227ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:03.363511    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363517    9989 pod_ready.go:38] duration metric: took 1.687604899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:03.363529    9989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 19:48:03.375111    9989 command_runner.go:130] > -16
	I0610 19:48:03.375245    9989 ops.go:34] apiserver oom_adj: -16
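The oom_adj probe above is the logged one-liner `cat /proc/$(pgrep kube-apiserver)/oom_adj`; reading -16 confirms both that the apiserver process is up and that the kernel has been told to strongly deprioritize OOM-killing it. A Go equivalent of that shell check (simplified to the first pgrep match, Linux-only, and not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits non-zero when no kube-apiserver process exists.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first match only
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}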
	I0610 19:48:03.375257    9989 kubeadm.go:591] duration metric: took 7.76794986s to restartPrimaryControlPlane
	I0610 19:48:03.375262    9989 kubeadm.go:393] duration metric: took 7.787982406s to StartCluster
	I0610 19:48:03.375275    9989 settings.go:142] acquiring lock: {Name:mkfdfd0a396b1866366b70895e6d936c4f7de68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:48:03.375367    9989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:48:03.375765    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
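The lock.go line shows the kubeconfig write going through a named file lock (500ms retry delay, 1m timeout) so concurrent minikube invocations do not clobber the shared file. A generic flock-based version of that guard (the common pattern only, not minikube's actual lock package; the retry/timeout bookkeeping from the log is omitted):

package kubecfg

import (
	"os"
	"syscall"
)

// writeFileLocked takes an exclusive flock on a sidecar .lock file before
// writing, so two processes updating the same kubeconfig serialize.
func writeFileLocked(path string, data []byte) error {
	lf, err := os.OpenFile(path+".lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return err
	}
	defer lf.Close()
	if err := syscall.Flock(int(lf.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(lf.Fd()), syscall.LOCK_UN)
	return os.WriteFile(path, data, 0o600)
}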
	I0610 19:48:03.376028    9989 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 19:48:03.400444    9989 out.go:177] * Verifying Kubernetes components...
	I0610 19:48:03.376041    9989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 19:48:03.376184    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:03.421565    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:03.463087    9989 out.go:177] * Enabled addons: 
	I0610 19:48:03.484252    9989 addons.go:510] duration metric: took 108.208716ms for enable addons: enabled=[]
	I0610 19:48:03.563649    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:48:03.576041    9989 node_ready.go:35] waiting up to 6m0s for node "multinode-353000" to be "Ready" ...
	I0610 19:48:03.576103    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.576110    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.576116    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.576120    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.577625    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:03.577635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.577640    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.577644    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.577652    9989 round_trippers.go:580]     Audit-Id: 1a9b118d-1c1f-4a85-b573-ec6d65f2ea3e
	I0610 19:48:03.577656    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.577658    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.577661    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.577737    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.077472    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.077497    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.077513    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.077519    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.080273    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.080289    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.080298    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.080305    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.080311    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.080315    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.080320    9989 round_trippers.go:580]     Audit-Id: 1859e085-211f-4e27-92e7-f3b22958dff9
	I0610 19:48:04.080323    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.080687    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.577072    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.577095    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.577107    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.577115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.579474    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.579488    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.579496    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.579500    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.579505    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.579508    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.579511    9989 round_trippers.go:580]     Audit-Id: d35268d8-5a6a-4b80-9fc5-c56ab0f588fa
	I0610 19:48:04.579516    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.579860    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:05.077214    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.077238    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.077249    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.077255    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.079762    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.079777    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.079784    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.079788    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.079791    9989 round_trippers.go:580]     Audit-Id: 8db0d71b-506a-485d-b9c4-877536f220a0
	I0610 19:48:05.079795    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.079820    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.079828    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.079940    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.080178    9989 node_ready.go:49] node "multinode-353000" has status "Ready":"True"
	I0610 19:48:05.080194    9989 node_ready.go:38] duration metric: took 1.504185458s for node "multinode-353000" to be "Ready" ...
	I0610 19:48:05.080202    9989 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
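
The loop that follows implements exactly this wait: for each of the listed label selectors, list the matching pods in kube-system and re-check roughly every 500ms until every pod reports the PodReady condition or the 6m0s budget runs out. A hedged sketch of that pattern with standard client-go calls (waitForSystemPods and podReady are hypothetical helper names; this is not the literal pod_ready.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForSystemPods polls each system-critical selector until its pods are
// Ready or the shared deadline expires (hypothetical helper, for illustration).
func waitForSystemPods(cs kubernetes.Interface, timeout time.Duration) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(timeout)
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				break
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q", sel)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms request cadence in the log
		}
	}
	return nil
}

func main() {
	// Load the kubeconfig the same way kubectl would (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitForSystemPods(kubernetes.NewForConfigOrDie(config), 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all system-critical pods are Ready")
}

The half-second cadence assumed above can be read directly off the request timestamps that follow (…:05.08, :05.58, :06.08, and so on).
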
	I0610 19:48:05.080250    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:05.080258    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.080265    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.080270    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.082809    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.082818    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.082823    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.082827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.082831    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.082834    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.082836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.082839    9989 round_trippers.go:580]     Audit-Id: ddb615f3-2587-4f9c-8d81-31db61bb1a6e
	I0610 19:48:05.083922    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
	I0610 19:48:05.085829    9989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:05.085871    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.085875    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.085881    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.085896    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.086914    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.086929    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.086937    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.086941    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.086944    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.086947    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.086957    9989 round_trippers.go:580]     Audit-Id: b4ad06e6-d502-42ac-9675-7f15e25621df
	I0610 19:48:05.086961    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.087093    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.087343    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.087350    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.087355    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.087359    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.088202    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:05.088209    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.088215    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.088221    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.088226    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.088231    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.088236    9989 round_trippers.go:580]     Audit-Id: b6058267-b32d-4d28-9209-3e3c65514ada
	I0610 19:48:05.088239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.088425    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.586718    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.586742    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.586754    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.586759    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.589614    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.589627    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.589634    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.589639    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.589643    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.589648    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.589653    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.589657    9989 round_trippers.go:580]     Audit-Id: a2558bb6-21de-413e-adb7-2066705c0c39
	I0610 19:48:05.589740    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.590099    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.590114    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.590121    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.590127    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.591639    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.591647    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.591654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.591672    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.591679    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.591683    9989 round_trippers.go:580]     Audit-Id: 2de87cae-73ae-440c-a6d4-90fb3f51f475
	I0610 19:48:05.591688    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.591709    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.591808    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.086573    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.086600    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.086612    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.086618    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.089412    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.089427    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.089434    9989 round_trippers.go:580]     Audit-Id: f7e13af5-b1a6-43d3-bb98-5aad49fca036
	I0610 19:48:06.089438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.089441    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.089446    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.089450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.089589    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.089977    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.089987    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.089994    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.089998    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.091344    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.091353    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.091358    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.091361    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.091364    9989 round_trippers.go:580]     Audit-Id: 7a289ac0-a7eb-4e17-a539-34afa9d10e8f
	I0610 19:48:06.091367    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.091370    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.091372    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.091556    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.587106    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.587131    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.587143    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.587148    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.589792    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.589811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.589818    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.589822    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.589835    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.589840    9989 round_trippers.go:580]     Audit-Id: 1ec66f4a-3740-4406-bbd1-e5ca56116de6
	I0610 19:48:06.589843    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.589847    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.590009    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.590408    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.590419    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.590425    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.590431    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.591734    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.591742    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.591746    9989 round_trippers.go:580]     Audit-Id: 3ed956f5-c213-4c78-a89b-9a399e0d9f57
	I0610 19:48:06.591749    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.591752    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.591755    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.591758    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.591760    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.591853    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.086755    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.086817    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.086833    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.086840    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.089422    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.089436    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.089444    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.089448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.089456    9989 round_trippers.go:580]     Audit-Id: 3c2b2755-0928-4843-907f-76f6698cb531
	I0610 19:48:07.089461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.089464    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.089848    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.090239    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.090248    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.090257    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.090263    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.091435    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.091442    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.091447    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.091461    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.091466    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.091469    9989 round_trippers.go:580]     Audit-Id: a295b9b4-766e-4157-bafe-85b97af1b24f
	I0610 19:48:07.091473    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.091477    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.091632    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.091819    9989 pod_ready.go:102] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"False"
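
Each GET above keeps returning the pod at resourceVersion 892 with Ready False; the transition only becomes visible once the kubelet posts a new status (resourceVersion 939 below flips it to True). An alternative to this fixed-interval polling, sketched under the same client-go assumptions, is a watch on the single pod resumed from the last observed resourceVersion, so the Ready flip arrives as an event instead of being discovered by the next poll (the pod name and resourceVersion are copied from the log purely for illustration; minikube's helper polls, as the surrounding requests show):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Watch only this pod; resume from the last resourceVersion seen in the
	// polling responses (both values taken from the log for illustration).
	w, err := cs.CoreV1().Pods("kube-system").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=coredns-7db6d8ff4d-x984g",
		ResourceVersion: "892",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each status update arrives as an event; return on the Ready transition.
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
	}
}
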
	I0610 19:48:07.586768    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.586789    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.586801    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.586811    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.589483    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.589501    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.589508    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.589513    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.589518    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.589523    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.589529    9989 round_trippers.go:580]     Audit-Id: 8f011804-7b53-46a0-8762-c6021b6b797c
	I0610 19:48:07.589533    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.589733    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.590139    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.590149    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.590157    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.590161    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.591411    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.591423    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.591431    9989 round_trippers.go:580]     Audit-Id: 32d80ac7-569b-4efe-b59c-6c43cc45cbb0
	I0610 19:48:07.591438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.591442    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.591450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.591455    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.591459    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.591711    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.085955    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.085978    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.085989    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.085995    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.088888    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:08.088905    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.088913    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.088917    9989 round_trippers.go:580]     Audit-Id: 6130cd3b-545c-4dab-bb4e-8509f6ca7583
	I0610 19:48:08.088921    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.088924    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.088929    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.088943    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.089331    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.089733    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.089743    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.089751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.089757    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.091163    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.091171    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.091176    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.091178    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.091181    9989 round_trippers.go:580]     Audit-Id: fb4feb18-1294-4799-b740-01b7c906b714
	I0610 19:48:08.091183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.091187    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.091191    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.091368    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.586116    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.586130    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.586136    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.586139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.588086    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.588098    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.588103    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.588106    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.588108    9989 round_trippers.go:580]     Audit-Id: 0ee2c29d-3bee-4ce6-b7f8-9c58b599b3c3
	I0610 19:48:08.588111    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.588114    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.588116    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.588226    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.588519    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.588525    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.588531    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.588534    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.593668    9989 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 19:48:08.593684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.593689    9989 round_trippers.go:580]     Audit-Id: ad0e5c68-e6f8-4266-8198-de1fd97d7f9b
	I0610 19:48:08.593692    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.593694    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.593696    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.593699    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.593702    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.593773    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.086588    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:09.086618    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.086658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.086666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.089146    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.089159    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.089199    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.089213    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.089220    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.089227    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.089232    9989 round_trippers.go:580]     Audit-Id: f98d64ed-8706-40c8-bca0-af200ff708e8
	I0610 19:48:09.089239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.089496    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0610 19:48:09.089821    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.089828    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.089834    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.089837    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.090901    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.090910    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.090914    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.090918    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.090922    9989 round_trippers.go:580]     Audit-Id: 684d3cb2-4de8-4213-801b-a1b1cdca1ae6
	I0610 19:48:09.090926    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.090929    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.090932    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.091098    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.091288    9989 pod_ready.go:92] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:09.091297    9989 pod_ready.go:81] duration metric: took 4.005597593s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:09.091304    9989 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:09.091332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.091336    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.091342    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.091345    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.092345    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.092354    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.092359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.092364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.092368    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.092372    9989 round_trippers.go:580]     Audit-Id: 0ec593cf-ab0e-4393-b1d5-d458992d576c
	I0610 19:48:09.092378    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.092386    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.092510    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.092739    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.092746    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.092751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.092754    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.093693    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.093703    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.093710    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.093716    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.093720    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.093723    9989 round_trippers.go:580]     Audit-Id: cd7754ad-de2e-4337-95c9-5f8181bafe8a
	I0610 19:48:09.093726    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.093736    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.093852    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.591562    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.591592    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.591601    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.591606    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.593937    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.593942    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.593946    9989 round_trippers.go:580]     Audit-Id: a1e77184-60e5-45b7-991d-afda7283198c
	I0610 19:48:09.593949    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.593953    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.593955    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.593958    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.594184    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.594428    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.594435    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.594441    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.594444    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.595688    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.595698    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.595705    9989 round_trippers.go:580]     Audit-Id: b8c9b2c8-7992-42e7-9bf8-112b13ef8d15
	I0610 19:48:09.595711    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.595721    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.595729    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.595732    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.595734    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.595855    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.091896    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.091930    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.091948    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.091961    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.094812    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:10.094827    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.094833    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.094838    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.094842    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.094847    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.094850    9989 round_trippers.go:580]     Audit-Id: 36d914ed-5a76-4cfd-aea2-50d2467afc00
	I0610 19:48:10.094854    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.095220    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.095550    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.095559    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.095567    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.095572    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.097001    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.097008    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.097012    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.097016    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.097018    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.097021    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.097031    9989 round_trippers.go:580]     Audit-Id: 69d6521e-fa5d-4f41-a0e6-1742e53a772b
	I0610 19:48:10.097034    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.097219    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.592589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.592613    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.592625    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.592631    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.595848    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:10.595860    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.595867    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.595872    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.595876    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.595881    9989 round_trippers.go:580]     Audit-Id: 11308bab-1148-4a9a-9a2f-6d24ea1297c6
	I0610 19:48:10.595886    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.595890    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.595995    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.596332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.596342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.596350    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.596372    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.597763    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.597770    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.597776    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.597781    9989 round_trippers.go:580]     Audit-Id: 04f99b83-61e5-4bf2-8781-a0e87f56f205
	I0610 19:48:10.597786    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.597791    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.597794    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.597796    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.597950    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.092146    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.092175    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.092188    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.092244    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.094833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.094848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.094855    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.094859    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.094864    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.094869    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.094873    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.094877    9989 round_trippers.go:580]     Audit-Id: f1b5bd76-11e8-4009-a1d4-09ae141a7be4
	I0610 19:48:11.095063    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.095396    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.095405    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.095414    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.095420    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.096829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.096837    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.096842    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.096845    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.096848    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.096851    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.096855    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.096857    9989 round_trippers.go:580]     Audit-Id: 5edc3937-e4f9-4fc8-924f-f2f08684b9af
	I0610 19:48:11.097460    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.097661    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
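The loop above repeats on a roughly 500ms cadence: each iteration GETs the etcd pod and then its node, and pod_ready keeps reporting "Ready":"False" until the pod's Ready condition flips. A minimal client-go sketch of this kind of readiness wait follows; it is an illustrative reconstruction inferred from the log timestamps, not minikube's actual pod_ready.go, and the helper names and fixed 500ms interval are assumptions.

	// podwait: hedged sketch of a pod-readiness poll like the one logged above.
	// Assumptions: a client-go clientset built elsewhere from the kubeconfig;
	// the 500ms interval and all identifiers here are illustrative only.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the named pod until it is Ready or the timeout elapses.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence seen in the log
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}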
	I0610 19:48:11.592045    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.592069    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.592139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.592150    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.594256    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.594268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.594276    9989 round_trippers.go:580]     Audit-Id: 22199be0-8b40-4afe-8222-00876ce24849
	I0610 19:48:11.594280    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.594284    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.594289    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.594292    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.594295    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.594751    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.595057    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.595064    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.595069    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.595073    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.596263    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.596270    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.596275    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.596277    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.596280    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.596282    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.596285    9989 round_trippers.go:580]     Audit-Id: 1e950ce6-6a1d-4fb4-862e-369bdd1c1b97
	I0610 19:48:11.596287    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.596438    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.091946    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.092024    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.092038    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.092047    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.094382    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:12.094392    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.094398    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.094402    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.094410    9989 round_trippers.go:580]     Audit-Id: fae3296c-1bb4-48d8-bb8a-365ebcc14279
	I0610 19:48:12.094421    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.094424    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.094428    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.094726    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.095092    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.095102    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.095110    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.095115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.096329    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.096337    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.096342    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.096346    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.096350    9989 round_trippers.go:580]     Audit-Id: 3613c759-c38d-4132-b7db-3ebfd2715c11
	I0610 19:48:12.096352    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.096355    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.096357    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.096531    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.591302    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.591317    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.591323    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.591326    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.592512    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.592525    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.592532    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.592537    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.592541    9989 round_trippers.go:580]     Audit-Id: b9eb3c47-6f8d-4edb-a70c-efdabd5c9569
	I0610 19:48:12.592545    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.592550    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.592554    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.592679    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.592922    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.592929    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.592935    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.592939    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.594275    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.594281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.594287    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.594291    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.594299    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.594306    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.594315    9989 round_trippers.go:580]     Audit-Id: 3687110f-6d7b-4d3c-a20f-dbbdac34123e
	I0610 19:48:12.594320    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.594536    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.092944    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.092964    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.092975    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.092980    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.094898    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.094907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.094913    9989 round_trippers.go:580]     Audit-Id: 4746a862-34ed-4f9d-86e0-fe54a5c8b1f0
	I0610 19:48:13.094916    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.094920    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.094923    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.094926    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.094929    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.095280    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.095536    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.095548    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.095554    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.095559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.096553    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:13.096561    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.096567    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.096571    9989 round_trippers.go:580]     Audit-Id: 72e59267-4587-49b7-acec-8760fef789ba
	I0610 19:48:13.096574    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.096579    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.096583    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.096586    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.096715    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.591444    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.591547    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.591562    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.591569    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:13.593942    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.593954    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.593964    9989 round_trippers.go:580]     Audit-Id: 4cb26672-7251-47d6-9956-9bd290658ddd
	I0610 19:48:13.593972    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.593977    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.593982    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.593989    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.594310    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.594645    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.594658    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.594666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.594673    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.596261    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.596268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.596273    9989 round_trippers.go:580]     Audit-Id: 67e85776-8134-4d60-b04e-6745575e0722
	I0610 19:48:13.596276    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.596280    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.596282    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.596286    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.596288    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.596582    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.596755    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
	I0610 19:48:14.091643    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.091719    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.091733    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.091741    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.094245    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.094280    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.094290    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.094312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.094319    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.094323    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.094329    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.094332    9989 round_trippers.go:580]     Audit-Id: 950f168e-9ccc-4272-accd-6013766a76ca
	I0610 19:48:14.094657    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.094995    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.095005    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.095012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.095015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.096236    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.096244    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.096250    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.096256    9989 round_trippers.go:580]     Audit-Id: eebd34cd-fcec-4d30-b2c0-a119875e2dbd
	I0610 19:48:14.096260    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.096265    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.096267    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.096270    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.096411    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:14.592108    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.592139    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.592184    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.592191    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.594672    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.594684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.594691    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.594694    9989 round_trippers.go:580]     Audit-Id: 8d594bf6-b784-4c8a-aec0-2be7690404dc
	I0610 19:48:14.594698    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.594701    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.594705    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.594709    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.595294    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.595634    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.595643    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.595658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.595665    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.596893    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.596900    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.596905    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.596917    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.596921    9989 round_trippers.go:580]     Audit-Id: 3dd28d6f-84f3-46df-9566-43f2d793ebd5
	I0610 19:48:14.596923    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.596927    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.596930    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.597086    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.091684    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.091716    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.091756    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.091765    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.094212    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.094225    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.094232    9989 round_trippers.go:580]     Audit-Id: e13d9f6e-c973-4ff8-873c-d7b8c4b8f56d
	I0610 19:48:15.094237    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.094242    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.094248    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.094252    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.094257    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.094341    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:15.094659    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.094668    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.094675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.094680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.096045    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.096057    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.096064    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.096085    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.096094    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.096100    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.096105    9989 round_trippers.go:580]     Audit-Id: 296d945a-df5f-46db-a534-d725c2470a49
	I0610 19:48:15.096109    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.096301    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.592832    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.592857    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.592866    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.592872    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.595717    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.595735    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.595746    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.595754    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.595772    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.595779    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.595786    9989 round_trippers.go:580]     Audit-Id: ae59896b-cf44-4f51-a715-f1122fd8af04
	I0610 19:48:15.595790    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.596233    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"958","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0610 19:48:15.596566    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.596576    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.596583    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.596597    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.597753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.597760    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.597765    9989 round_trippers.go:580]     Audit-Id: b0d6cb8a-03a6-44b3-a2ba-bbdc0b1bb2cd
	I0610 19:48:15.597769    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.597774    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.597778    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.597781    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.597783    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.597942    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.598119    9989 pod_ready.go:92] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.598127    9989 pod_ready.go:81] duration metric: took 6.507043423s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
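For reference, the pod_ready.go checks logged above reduce to reading the PodReady condition off each pod's status. A minimal sketch of that predicate, assuming the k8s.io/api types; the helper name is illustrative, not minikube's exact code:

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether a pod's PodReady condition is True; the
    // test logs `has status "Ready":"True"` (pod_ready.go:92) when it holds.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }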
	I0610 19:48:15.598142    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.598180    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:15.598184    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.598190    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.598194    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.599330    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.599339    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.599344    9989 round_trippers.go:580]     Audit-Id: 9ee40abb-4038-4697-bf98-1a8c08e3e5e7
	I0610 19:48:15.599355    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.599369    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.599374    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.599378    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.599383    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.599946    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"954","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0610 19:48:15.600736    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.600744    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.600750    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.600755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.602146    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.602154    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.602161    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.602166    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.602170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.602172    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.602175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.602177    9989 round_trippers.go:580]     Audit-Id: c8e6ccc9-5c26-4e00-8c74-5394763932f0
	I0610 19:48:15.602374    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.602545    9989 pod_ready.go:92] pod "kube-apiserver-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.602554    9989 pod_ready.go:81] duration metric: took 4.406297ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
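The GET / Request Headers / Response Status triplets that fill this log come from client-go's debugging round tripper (round_trippers.go), which wraps the HTTP transport at high verbosity. A rough plain-net/http equivalent, as a sketch with illustrative names:

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingRoundTripper wraps an existing transport and prints each
    // request line plus the response status and latency, mirroring the
    // round_trippers lines above.
    type loggingRoundTripper struct {
        next http.RoundTripper // e.g. http.DefaultTransport
    }

    func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
        log.Printf("%s %s", req.Method, req.URL)
        start := time.Now()
        resp, err := l.next.RoundTrip(req)
        if err == nil {
            log.Printf("Response Status: %s in %d milliseconds",
                resp.Status, time.Since(start).Milliseconds())
        }
        return resp, err
    }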
	I0610 19:48:15.602560    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.602589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:15.602593    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.602599    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.602603    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.603793    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.603799    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.603805    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.603809    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.603813    9989 round_trippers.go:580]     Audit-Id: 06801598-bd08-4f01-b582-51da8e9dc299
	I0610 19:48:15.603815    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.603817    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.603820    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.604059    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"956","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0610 19:48:15.604286    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.604293    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.604298    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.604303    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605338    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.605345    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.605350    9989 round_trippers.go:580]     Audit-Id: ef3b568d-cb90-461e-91e7-4aa6b5568300
	I0610 19:48:15.605353    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.605357    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.605360    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.605364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.605373    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.605538    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.605703    9989 pod_ready.go:92] pod "kube-controller-manager-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.605711    9989 pod_ready.go:81] duration metric: took 3.145898ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605717    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605744    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:15.605749    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.605755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605759    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.606810    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.606817    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.606822    9989 round_trippers.go:580]     Audit-Id: 9e88e041-c1ec-4328-a34c-7b5e2396785a
	I0610 19:48:15.606825    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.606827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.606830    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.606833    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.606836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.607062    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:15.607284    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:15.607291    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.607297    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.607301    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.608273    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.608281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.608288    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.608294    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.608298    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.608301    9989 round_trippers.go:580]     Audit-Id: 9b407b86-eb01-4135-9dfb-f26b1633b27a
	I0610 19:48:15.608303    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.608306    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.608468    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"949","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:15.608621    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:15.608630    9989 pod_ready.go:81] duration metric: took 2.908037ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:15.608636    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
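The skip at pod_ready.go:97 above is driven by the hosting node's NodeReady condition: multinode-353000-m03 reports Ready:"Unknown" (the node is down), so its kube-proxy pod is skipped rather than waited on. A sketch of that node-side gate, again with k8s.io/api types and an illustrative helper name:

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady reports whether a node's NodeReady condition is True;
    // "Unknown", as for multinode-353000-m03 above, fails this check.
    func isNodeReady(node *corev1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }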
	I0610 19:48:15.608641    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.608665    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:15.608670    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.608675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.608680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.609749    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.609755    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.609759    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.609763    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.609766    9989 round_trippers.go:580]     Audit-Id: 9d2809bc-8920-4033-a980-81e0b514b51e
	I0610 19:48:15.609768    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.609771    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.609774    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.609923    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:15.610130    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:15.610137    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.610142    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.610147    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.611124    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.611131    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.611136    9989 round_trippers.go:580]     Audit-Id: b7f93f53-711a-4909-8dfa-b5358e3edf06
	I0610 19:48:15.611163    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.611167    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.611170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.611173    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.611175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.611312    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:15.611447    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.611454    9989 pod_ready.go:81] duration metric: took 2.808014ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.611459    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.794030    9989 request.go:629] Waited for 182.512666ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:15.794147    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:15.794157    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.794169    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.794177    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.796912    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.796926    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.796934    9989 round_trippers.go:580]     Audit-Id: 3854ac46-1b79-4426-8236-7591cc550ae2
	I0610 19:48:15.796938    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.796942    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.796946    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.796978    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.796983    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.797082    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:15.994033    9989 request.go:629] Waited for 196.636422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994102    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994108    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.994117    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.994122    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.995838    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.995848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.995853    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.995857    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.995860    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.995863    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.995866    9989 round_trippers.go:580]     Audit-Id: 038e8b7e-5833-4987-8dec-d70fd06fd8f3
	I0610 19:48:15.995869    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.996172    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.996363    9989 pod_ready.go:92] pod "kube-proxy-v7s4q" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.996371    9989 pod_ready.go:81] duration metric: took 384.920541ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
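The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the apiserver. A sketch of where those limits live on a rest.Config; the QPS/Burst numbers here are illustrative, not minikube's actual settings:

    import "k8s.io/client-go/rest"

    // withClientThrottle sets the client-side limits behind the request.go:629
    // waits above: when RateLimiter is left nil, client-go builds a
    // token-bucket limiter from QPS and Burst, and requests beyond the burst
    // block until tokens refill.
    func withClientThrottle(cfg *rest.Config) *rest.Config {
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // short bursts allowed before requests start waiting
        return cfg
    }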
	I0610 19:48:15.996378    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.194182    9989 request.go:629] Waited for 197.750366ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194292    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194302    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.194312    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.194320    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.196795    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.196809    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.196822    9989 round_trippers.go:580]     Audit-Id: 038d5bdb-1b7f-4b04-89c8-33d598c4b1d6
	I0610 19:48:16.196840    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.196849    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.196855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.196880    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.196889    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.197056    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"943","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0610 19:48:16.393212    9989 request.go:629] Waited for 195.873626ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393266    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393272    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.393278    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.393282    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.395123    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:16.395136    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.395141    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.395145    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.395150    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.395153    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.395155    9989 round_trippers.go:580]     Audit-Id: ab94a6ed-7607-433e-8303-56582026becf
	I0610 19:48:16.395158    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.395272    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:16.395463    9989 pod_ready.go:92] pod "kube-scheduler-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:16.395471    9989 pod_ready.go:81] duration metric: took 399.102366ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.395478    9989 pod_ready.go:38] duration metric: took 11.315661502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:16.395490    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:48:16.395535    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:48:16.407763    9989 command_runner.go:130] > 1536
	I0610 19:48:16.407838    9989 api_server.go:72] duration metric: took 13.032244276s to wait for apiserver process to appear ...
	I0610 19:48:16.407853    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:48:16.407872    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:16.410818    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
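The healthz gate at api_server.go:253/279 above is a plain HTTPS GET whose body must read "ok". A minimal sketch under that assumption; TLS setup is elided, since a real client would load the cluster CA from the kubeconfig:

    import (
        "io"
        "net/http"
    )

    // apiserverHealthy probes https://<host>/healthz; a 200 response whose
    // body is exactly "ok" is the condition the log reports above.
    func apiserverHealthy(client *http.Client, host string) (bool, error) {
        resp, err := client.Get("https://" + host + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }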
	I0610 19:48:16.410851    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:16.410855    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.410861    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.410865    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.411473    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:16.411482    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.411486    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.411489    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:16.411493    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.411496    9989 round_trippers.go:580]     Audit-Id: 9e18606b-4bce-473d-8045-05f615ea3c0b
	I0610 19:48:16.411499    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.411502    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.411504    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.411534    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:16.411563    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:16.411571    9989 api_server.go:131] duration metric: took 3.713676ms to wait for apiserver health ...
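The /version payload printed above decodes into apimachinery's version.Info struct; minikube reads GitVersion out of it to report "control plane version: v1.30.1". A sketch of that decode:

    import (
        "encoding/json"

        "k8s.io/apimachinery/pkg/version"
    )

    // controlPlaneVersion parses the /version response body shown above
    // and returns its gitVersion field, e.g. "v1.30.1".
    func controlPlaneVersion(body []byte) (string, error) {
        var info version.Info
        if err := json.Unmarshal(body, &info); err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }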
	I0610 19:48:16.411576    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:16.593917    9989 request.go:629] Waited for 182.303257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593969    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593982    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.594020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.594030    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.598338    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.598347    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.598352    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.598356    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.598359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.598362    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.598366    9989 round_trippers.go:580]     Audit-Id: 739ff66b-4603-4a26-9ed9-1936484cf2df
	I0610 19:48:16.598369    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.598986    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:16.600809    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:16.600820    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:16.600824    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:16.600827    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:16.600829    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:16.600832    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:16.600835    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:16.600838    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:16.600841    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:16.600843    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:16.600846    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:16.600849    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:16.600851    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:16.600856    9989 system_pods.go:74] duration metric: took 189.281493ms to wait for pod list to return data ...
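The 12-pod inventory above is one List call against the kube-system namespace. For reference, the equivalent client-go call, sketched under the assumption of an already-built clientset:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printSystemPods mirrors the system_pods.go listing above: a single GET
    // of /api/v1/namespaces/kube-system/pods, then one name/uid/phase line
    // per pod.
    func printSystemPods(cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }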
	I0610 19:48:16.600861    9989 default_sa.go:34] waiting for default service account to be created ...
	I0610 19:48:16.794887    9989 request.go:629] Waited for 193.957918ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794986    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794997    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.795009    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.795017    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.797833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.797849    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.797856    9989 round_trippers.go:580]     Content-Length: 261
	I0610 19:48:16.797860    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.797863    9989 round_trippers.go:580]     Audit-Id: a5fbe232-e1a9-4892-a78a-2013b453a7c8
	I0610 19:48:16.797870    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.797873    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.797878    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.797881    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.797896    9989 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"809c40cb-86f1-483d-98cc-1b46432644d5","resourceVersion":"323","creationTimestamp":"2024-06-11T02:40:31Z"}}]}
	I0610 19:48:16.798039    9989 default_sa.go:45] found service account: "default"
	I0610 19:48:16.798051    9989 default_sa.go:55] duration metric: took 197.191772ms for default service account to be created ...
	I0610 19:48:16.798058    9989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 19:48:16.994131    9989 request.go:629] Waited for 196.005872ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994194    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994203    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.994251    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.994262    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.998793    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.998811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.998819    9989 round_trippers.go:580]     Audit-Id: 3a3c6305-a6bc-4dd6-990c-e7f5db70738f
	I0610 19:48:16.998824    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.998829    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.998845    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.998850    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.998853    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.999210    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:17.001028    9989 system_pods.go:86] 12 kube-system pods found
	I0610 19:48:17.001039    9989 system_pods.go:89] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:17.001043    9989 system_pods.go:89] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:17.001047    9989 system_pods.go:89] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:17.001050    9989 system_pods.go:89] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:17.001054    9989 system_pods.go:89] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:17.001057    9989 system_pods.go:89] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:17.001062    9989 system_pods.go:89] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:17.001065    9989 system_pods.go:89] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:17.001069    9989 system_pods.go:89] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:17.001072    9989 system_pods.go:89] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:17.001076    9989 system_pods.go:89] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:17.001079    9989 system_pods.go:89] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:17.001084    9989 system_pods.go:126] duration metric: took 203.027203ms to wait for k8s-apps to be running ...
	I0610 19:48:17.001090    9989 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 19:48:17.001139    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:48:17.012670    9989 system_svc.go:56] duration metric: took 11.575591ms WaitForService to wait for kubelet
	I0610 19:48:17.012687    9989 kubeadm.go:576] duration metric: took 13.637116157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
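The kubelet check above shells out to systemctl over SSH; with `is-active --quiet`, systemctl prints nothing and communicates only through its exit code. A local sketch of the same probe (minikube runs it via ssh_runner; plain os/exec is used here for illustration, mirroring the exact arguments logged above):

    import "os/exec"

    // kubeletActive returns true when the kubelet unit is active: exit
    // status 0 from `systemctl is-active --quiet` is the whole signal.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet",
            "service", "kubelet").Run() == nil
    }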
	I0610 19:48:17.012699    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:17.194231    9989 request.go:629] Waited for 181.491134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194340    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:17.194363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:17.194370    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:17.197119    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:17.197137    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:17.197149    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:17.197156    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:17.197162    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:17.197169    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:17.197176    9989 round_trippers.go:580]     Audit-Id: d3d91bd9-0b1c-4a20-9ebb-04b5962cdbc6
	I0610 19:48:17.197183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:17.197758    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15445 chars]
	I0610 19:48:17.198317    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198329    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198338    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198342    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198348    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198354    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198359    9989 node_conditions.go:105] duration metric: took 185.662539ms to run NodePressure ...
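The NodePressure pass above reads each node's capacity from its status; all three nodes report 17734596Ki of ephemeral storage and 2 CPUs. A sketch of that read with the k8s.io/api types, helper name illustrative:

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // logNodeCapacity prints the two figures node_conditions.go:122-123
    // report above, pulled from the node's status.capacity map.
    func logNodeCapacity(node *corev1.Node) {
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
        fmt.Printf("node cpu capacity is %s\n", cpu.String())
    }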
	I0610 19:48:17.198370    9989 start.go:240] waiting for startup goroutines ...
	I0610 19:48:17.198378    9989 start.go:245] waiting for cluster config update ...
	I0610 19:48:17.198401    9989 start.go:254] writing updated cluster config ...
	I0610 19:48:17.220816    9989 out.go:177] 
	I0610 19:48:17.242724    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:17.242860    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.265195    9989 out.go:177] * Starting "multinode-353000-m02" worker node in "multinode-353000" cluster
	I0610 19:48:17.307293    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:48:17.307327    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:48:17.307547    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:48:17.307565    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:48:17.307689    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.308695    9989 start.go:360] acquireMachinesLock for multinode-353000-m02: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:48:17.308814    9989 start.go:364] duration metric: took 94.629µs to acquireMachinesLock for "multinode-353000-m02"
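The acquireMachinesLock line above prints a juju/mutex-style Spec (Name, Clock, Delay 500ms, Timeout 13m), which serializes VM create/start across concurrent minikube processes. A hedged sketch of acquiring such a cross-process lock; exact import paths vary by minikube version, so treat this as illustrative:

    import (
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    // lockMachines takes a machine-wide named lock: Delay is the retry
    // interval between attempts and Timeout bounds the whole wait, matching
    // the Spec fields printed in the log above.
    func lockMachines(name string) (mutex.Releaser, error) {
        return mutex.Acquire(mutex.Spec{
            Name:    name, // e.g. the hash-like name in the log above
            Clock:   clock.WallClock,
            Delay:   500 * time.Millisecond,
            Timeout: 13 * time.Minute,
        })
    }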
	I0610 19:48:17.308843    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:48:17.308851    9989 fix.go:54] fixHost starting: m02
	I0610 19:48:17.309302    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:48:17.309340    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:48:17.318771    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53805
	I0610 19:48:17.319159    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:48:17.319519    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:48:17.319536    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:48:17.319731    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:48:17.319893    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.319997    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:48:17.320076    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.320165    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:48:17.321139    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.321165    9989 fix.go:112] recreateIfNeeded on multinode-353000-m02: state=Stopped err=<nil>
	I0610 19:48:17.321176    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	W0610 19:48:17.321267    9989 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:48:17.342117    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000-m02" ...
	I0610 19:48:17.384293    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .Start
	I0610 19:48:17.384586    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.384618    9989 main.go:141] libmachine: (multinode-353000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid
	I0610 19:48:17.386481    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.386504    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | pid 9545 is in state "Stopped"
	I0610 19:48:17.386538    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid...
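The "hyperkit pid 9545 missing from process table" / "Removing stale pid file" sequence above is the classic pid-file liveness probe: sending signal 0 tests for process existence without delivering anything. A Unix-only sketch, helper name illustrative:

    import (
        "os"
        "syscall"
    )

    // pidAlive reports whether a process with the given pid exists. On Unix,
    // os.FindProcess always succeeds, so Signal(0) does the real check; a
    // stale pid file (as above) yields an error and the file is removed.
    func pidAlive(pid int) bool {
        proc, err := os.FindProcess(pid)
        if err != nil {
            return false
        }
        return proc.Signal(syscall.Signal(0)) == nil
    }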
	I0610 19:48:17.386916    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Using UUID 3b15a703-00dc-45e7-88e9-620fa037ae16
	I0610 19:48:17.404856    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Generated MAC 9a:45:71:59:94:c9
	I0610 19:48:17.404885    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:48:17.405069    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:48:17.405097    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:48:17.405170    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3b15a703-00dc-45e7-88e9-620fa037ae16", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:48:17.405218    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3b15a703-00dc-45e7-88e9-620fa037ae16 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:48:17.405234    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:48:17.406727    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Pid is 10028
	I0610 19:48:17.407115    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 0
	I0610 19:48:17.407129    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.407257    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 10028
	I0610 19:48:17.409351    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:48:17.409467    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:48:17.409488    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690bdc}
	I0610 19:48:17.409512    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:48:17.409523    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:48:17.409543    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found match: 9a:45:71:59:94:c9
	I0610 19:48:17.409570    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | IP: 192.169.0.20
	I0610 19:48:17.409579    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:48:17.410301    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:17.410512    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.410985    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:48:17.410995    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.411096    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:17.411190    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:17.411313    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411449    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411555    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:17.411688    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:17.411842    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:17.411849    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:48:17.415070    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:48:17.423513    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:48:17.424462    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.424485    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.424494    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.424500    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.810455    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:48:17.810477    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:48:17.925056    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.925078    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.925090    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.925102    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.925970    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:48:17.925981    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:48:23.237466    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:48:23.237549    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:48:23.237560    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:48:23.261554    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:48:52.481015    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:48:52.481029    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481167    9989 buildroot.go:166] provisioning hostname "multinode-353000-m02"
	I0610 19:48:52.481180    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481288    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.481384    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.481465    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481540    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481624    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.481764    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.481913    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.481922    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000-m02 && echo "multinode-353000-m02" | sudo tee /etc/hostname
	I0610 19:48:52.555898    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m02
	
	I0610 19:48:52.555912    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.556047    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.556155    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556244    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556351    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.556487    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.556669    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.556682    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:48:52.627006    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:48:52.627024    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:48:52.627038    9989 buildroot.go:174] setting up certificates
	I0610 19:48:52.627044    9989 provision.go:84] configureAuth start
	I0610 19:48:52.627052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.627185    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:52.627290    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.627382    9989 provision.go:143] copyHostCerts
	I0610 19:48:52.627410    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627456    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:48:52.627462    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627594    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:48:52.627791    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627821    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:48:52.627825    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627924    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:48:52.628081    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628109    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:48:52.628113    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628206    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:48:52.628383    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m02 san=[127.0.0.1 192.169.0.20 localhost minikube multinode-353000-m02]
	I0610 19:48:52.864621    9989 provision.go:177] copyRemoteCerts
	I0610 19:48:52.864670    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:48:52.864684    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.864871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.865093    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.865223    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.865370    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:52.902301    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:48:52.902374    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:48:52.922200    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:48:52.922272    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 19:48:52.942419    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:48:52.942486    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:48:52.961961    9989 provision.go:87] duration metric: took 334.921541ms to configureAuth
	I0610 19:48:52.961973    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:48:52.962132    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:52.962145    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:52.962271    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.962375    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.962471    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962561    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962649    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.962765    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.962891    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.962899    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:48:53.026409    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:48:53.026421    9989 buildroot.go:70] root file system type: tmpfs
	I0610 19:48:53.026513    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:48:53.026532    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.026664    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.026757    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026854    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026936    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.027075    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.027217    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.027260    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.19"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:48:53.101854    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.19
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:48:53.101871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.102004    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.102084    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102159    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102254    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.102385    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.102564    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.102577    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:48:54.746316    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:48:54.746329    9989 machine.go:97] duration metric: took 37.336632265s to provisionDockerMachine
	I0610 19:48:54.746338    9989 start.go:293] postStartSetup for "multinode-353000-m02" (driver="hyperkit")
	I0610 19:48:54.746346    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:48:54.746364    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.746553    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:48:54.746573    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.746671    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.746768    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.746849    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.746924    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.784393    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:48:54.787362    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:48:54.787371    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:48:54.787375    9989 command_runner.go:130] > ID=buildroot
	I0610 19:48:54.787379    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:48:54.787385    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:48:54.787467    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:48:54.787474    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:48:54.787570    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:48:54.787737    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:48:54.787743    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:48:54.787933    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:48:54.795249    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:48:54.815317    9989 start.go:296] duration metric: took 68.971403ms for postStartSetup
	I0610 19:48:54.815337    9989 fix.go:56] duration metric: took 37.507788969s for fixHost
	I0610 19:48:54.815352    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.815497    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.815593    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815691    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815780    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.815896    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:54.816039    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:54.816046    9989 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 19:48:54.878000    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718074135.243306878
	
	I0610 19:48:54.878010    9989 fix.go:216] guest clock: 1718074135.243306878
	I0610 19:48:54.878017    9989 fix.go:229] Guest: 2024-06-10 19:48:55.243306878 -0700 PDT Remote: 2024-06-10 19:48:54.815342 -0700 PDT m=+195.166531099 (delta=427.964878ms)
	I0610 19:48:54.878027    9989 fix.go:200] guest clock delta is within tolerance: 427.964878ms
	I0610 19:48:54.878031    9989 start.go:83] releasing machines lock for "multinode-353000-m02", held for 37.570510595s
	I0610 19:48:54.878052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.878188    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:54.899842    9989 out.go:177] * Found network options:
	I0610 19:48:54.920775    9989 out.go:177]   - NO_PROXY=192.169.0.19
	W0610 19:48:54.941666    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.941707    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942405    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942613    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942729    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:48:54.942761    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	W0610 19:48:54.942841    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.942952    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.942957    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:48:54.942979    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.943187    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943226    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.943428    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943489    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943627    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943669    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.943798    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.979160    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:48:54.979221    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:48:54.979276    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:48:55.024346    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:48:55.024519    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:48:55.024548    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:48:55.024558    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.024672    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.039727    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:48:55.039987    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:48:55.049027    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:48:55.058181    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.058230    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:48:55.067256    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.076291    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:48:55.085310    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.094333    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:48:55.103537    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:48:55.112676    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:48:55.121615    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:48:55.130814    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:48:55.139162    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:48:55.139338    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:48:55.147700    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.246020    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 19:48:55.266428    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.266504    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:48:55.279486    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:48:55.279959    9989 command_runner.go:130] > [Unit]
	I0610 19:48:55.279969    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:48:55.279974    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:48:55.279987    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:48:55.279992    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:48:55.279996    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:48:55.280000    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:48:55.280003    9989 command_runner.go:130] > [Service]
	I0610 19:48:55.280006    9989 command_runner.go:130] > Type=notify
	I0610 19:48:55.280014    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:48:55.280019    9989 command_runner.go:130] > Environment=NO_PROXY=192.169.0.19
	I0610 19:48:55.280025    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:48:55.280036    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:48:55.280044    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:48:55.280049    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:48:55.280056    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:48:55.280061    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:48:55.280067    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:48:55.280078    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:48:55.280085    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:48:55.280088    9989 command_runner.go:130] > ExecStart=
	I0610 19:48:55.280100    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:48:55.280104    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:48:55.280112    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:48:55.280118    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:48:55.280122    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:48:55.280124    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:48:55.280128    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:48:55.280136    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:48:55.280141    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:48:55.280145    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:48:55.280149    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:48:55.280154    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:48:55.280158    9989 command_runner.go:130] > Delegate=yes
	I0610 19:48:55.280163    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:48:55.280170    9989 command_runner.go:130] > KillMode=process
	I0610 19:48:55.280175    9989 command_runner.go:130] > [Install]
	I0610 19:48:55.280181    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:48:55.280416    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.297490    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:48:55.315143    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.326478    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.337749    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:48:55.355043    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.365212    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.380927    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:48:55.381306    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:48:55.384049    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:48:55.384254    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:48:55.391544    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:48:55.404989    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:48:55.503276    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:48:55.597218    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.597255    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 19:48:55.612389    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.702999    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:49:56.756006    9989 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0610 19:49:56.756023    9989 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0610 19:49:56.756031    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.055138149s)
	I0610 19:49:56.756087    9989 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 19:49:56.764935    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.764947    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	I0610 19:49:56.764956    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	I0610 19:49:56.764968    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0610 19:49:56.764978    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 19:49:56.764989    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 19:49:56.765000    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765011    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 19:49:56.765020    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765044    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765058    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765082    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765093    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765103    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 19:49:56.765113    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765122    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765131    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765146    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765155    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765181    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765190    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 19:49:56.765199    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 19:49:56.765208    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 19:49:56.765218    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	I0610 19:49:56.765227    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 19:49:56.765235    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 19:49:56.765246    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 19:49:56.765255    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 19:49:56.765264    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 19:49:56.765273    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 19:49:56.765282    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765291    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765300    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 19:49:56.765308    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 19:49:56.765318    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765327    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765336    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765345    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765356    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765365    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765391    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765402    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765411    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765420    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765435    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765443    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765452    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765460    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765469    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765478    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765487    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765497    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765505    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765514    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765523    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765532    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 19:49:56.765540    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765549    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765558    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765568    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765580    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765838    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765857    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 19:49:56.765870    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765878    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 19:49:56.765888    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	I0610 19:49:56.765896    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 19:49:56.765905    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 19:49:56.765913    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 19:49:56.765921    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	I0610 19:49:56.765929    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 19:49:56.765936    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	I0610 19:49:56.765949    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 19:49:56.765956    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	I0610 19:49:56.765966    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 19:49:56.765973    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	I0610 19:49:56.765980    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 19:49:56.765987    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	I0610 19:49:56.765993    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	I0610 19:49:56.765998    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0610 19:49:56.766006    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	I0610 19:49:56.766015    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 19:49:56.766026    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	I0610 19:49:56.766038    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 19:49:56.766047    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 19:49:56.766063    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0610 19:49:56.766074    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0610 19:49:56.766107    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.766115    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	I0610 19:49:56.766124    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0610 19:49:56.766133    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 19:49:56.766140    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0610 19:49:56.766146    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0610 19:49:56.790586    9989 out.go:177] 
	W0610 19:49:56.812421    9989 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0610 19:49:56.812533    9989 out.go:239] * 
	W0610 19:49:56.813811    9989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 19:49:56.877394    9989 out.go:177] 
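
The tail of this run shows the actual failure: the restarted dockerd (pid 805) on multinode-353000-m02 waited the full 60 seconds for /run/containerd/containerd.sock (02:48:57 -> 02:49:57) before giving up, so systemd marked docker.service failed and minikube aborted with RUNTIME_ENABLE. A minimal diagnostic sketch, assuming the profile and node names from this run and that the VM is still reachable over `minikube ssh` (these commands are illustrative and not part of the captured log):

	# Check the unit and the containerd socket on the affected worker
	minikube -p multinode-353000 ssh -n multinode-353000-m02 "systemctl status docker.service --no-pager"
	minikube -p multinode-353000 ssh -n multinode-353000-m02 "sudo journalctl -xeu docker.service | tail -n 20"
	minikube -p multinode-353000 ssh -n multinode-353000-m02 "ls -l /run/containerd/containerd.sock; systemctl is-active containerd"
	# Capture the full bundle requested by the advice box above
	minikube logs --file=logs.txt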
	
	
	==> Docker <==
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862187389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862199605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862786603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.960767168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.960985026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.961000004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.965728902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 cri-dockerd[1001]: time="2024-06-11T02:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bbe0ba4f26fa092aabac2dd15236185366045b7fe696deb8ca62e57cf21bba64/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 11 02:48:09 multinode-353000 cri-dockerd[1001]: time="2024-06-11T02:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e6d5e599ec17df742f5e6d8e8e063567cfce9334498434e4e9a9f94d2f0385da/resolv.conf as [nameserver 192.169.0.1]"
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129737312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129798261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129895010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.130045927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194027743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194077210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194088239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194261585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:31 multinode-353000 dockerd[781]: time="2024-06-11T02:48:31.767548453Z" level=info msg="ignoring event" container=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767817666Z" level=info msg="shim disconnected" id=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 namespace=moby
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767906619Z" level=warning msg="cleaning up after shim disconnected" id=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 namespace=moby
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767915567Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134344966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134410534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134420430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134480564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
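
The `==> Docker <==` excerpt above comes from the primary node's docker unit journal and shows only routine shim-plugin loading plus one container exit (the storage-provisioner restart at 02:48:31), so the primary's runtime is healthy. To re-collect the same excerpt by hand, a sketch assuming the profile name from this run:

	minikube -p multinode-353000 ssh "sudo journalctl -u docker --no-pager -n 25"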
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94827c43a9544       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   54b822818f491       storage-provisioner
	ccaa57ed742d0       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   e6d5e599ec17d       coredns-7db6d8ff4d-x984g
	a25c025ba395f       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   bbe0ba4f26fa0       busybox-fc5497c4f-4hdtl
	8adfed7dcc38a       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   65e9fb4a8551e       kindnet-j4h99
	26a1110268f56       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   31db7788c52d7       kube-proxy-v7s4q
	310a2ba1f3005       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   54b822818f491       storage-provisioner
	67aae91d2285d       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   128719801fb28       etcd-multinode-353000
	5d4dc7f0171a8       a52dc94f0a912                                                                                         2 minutes ago        Running             kube-scheduler            1                   3bef980dc628a       kube-scheduler-multinode-353000
	18988fa5e4f48       91be940803172                                                                                         2 minutes ago        Running             kube-apiserver            1                   faa88b411f410       kube-apiserver-multinode-353000
	f7b4550455000       25a1387cdab82                                                                                         2 minutes ago        Running             kube-controller-manager   1                   1255cdadd4b54       kube-controller-manager-multinode-353000
	8c6ad13b3a78e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago        Exited              busybox                   0                   55c2b427ef24f       busybox-fc5497c4f-4hdtl
	deba067632e3e       cbb01a7bd410d                                                                                         9 minutes ago        Exited              coredns                   0                   5cbb1f2848836       coredns-7db6d8ff4d-x984g
	f854aa2e2bd31       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              9 minutes ago        Exited              kindnet-cni               0                   5e434eeac16fa       kindnet-j4h99
	1b251ec109bf4       747097150317f                                                                                         9 minutes ago        Exited              kube-proxy                0                   75aef0f938fa2       kube-proxy-v7s4q
	496239ba94592       3861cfcd7c04c                                                                                         9 minutes ago        Exited              etcd                      0                   4479d5328ed80       etcd-multinode-353000
	4f9c6abaf085e       a52dc94f0a912                                                                                         9 minutes ago        Exited              kube-scheduler            0                   2627ea28857a0       kube-scheduler-multinode-353000
	e847ea1ccea34       91be940803172                                                                                         9 minutes ago        Exited              kube-apiserver            0                   4a744abd670d4       kube-apiserver-multinode-353000
	254a0e0afe628       25a1387cdab82                                                                                         9 minutes ago        Exited              kube-controller-manager   0                   0e7e3b74d4e98       kube-controller-manager-multinode-353000
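
The container-status table is CRI-level output (note the ATTEMPT and POD ID columns), so it can be reproduced with crictl against the cri-dockerd socket declared in the node annotations below. A sketch, assuming crictl is on the VM's PATH as it normally is in minikube images; plain `sudo crictl ps -a` may also work if the endpoint is preconfigured:

	minikube -p multinode-353000 ssh "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a"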
	
	
	==> coredns [ccaa57ed742d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54720 - 29707 "HINFO IN 3370124570245195731.7845949665974998901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010017697s
	
	
	==> coredns [deba067632e3] <==
	[INFO] 10.244.1.2:54969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067018s
	[INFO] 10.244.1.2:38029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071562s
	[INFO] 10.244.1.2:34326 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056229s
	[INFO] 10.244.1.2:53072 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077454s
	[INFO] 10.244.1.2:42751 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106879s
	[INFO] 10.244.1.2:35314 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070499s
	[INFO] 10.244.1.2:47905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037641s
	[INFO] 10.244.0.3:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080705s
	[INFO] 10.244.0.3:57627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107863s
	[INFO] 10.244.0.3:35475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031072s
	[INFO] 10.244.0.3:43687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098542s
	[INFO] 10.244.1.2:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087221s
	[INFO] 10.244.1.2:53832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099684s
	[INFO] 10.244.1.2:48880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068665s
	[INFO] 10.244.1.2:45968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057536s
	[INFO] 10.244.0.3:58843 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096021s
	[INFO] 10.244.0.3:32849 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001271s
	[INFO] 10.244.0.3:48661 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121766s
	[INFO] 10.244.0.3:42982 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000079089s
	[INFO] 10.244.1.2:53588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095171s
	[INFO] 10.244.1.2:51363 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006577s
	[INFO] 10.244.1.2:50446 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069941s
	[INFO] 10.244.1.2:58279 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000137813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
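
The first coredns instance served lookups for kubernetes.default and host.minikube.internal from both pod subnets (10.244.0.x and 10.244.1.x) before receiving SIGTERM during the restart. To replay one of the logged queries, a sketch using the busybox test pod listed in the container status above (this assumes its image ships nslookup, as the minikube test busybox does):

	kubectl exec busybox-fc5497c4f-4hdtl -- nslookup kubernetes.default
	kubectl exec busybox-fc5497c4f-4hdtl -- nslookup host.minikube.internal
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20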
	
	
	==> describe nodes <==
	Name:               multinode-353000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T19_40_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:49:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.19
	  Hostname:    multinode-353000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9b8a9458f2642adaf019d9b4b838fc8
	  System UUID:                f0e94315-0000-0000-ac08-1f17bf5837e0
	  Boot ID:                    6aadb9aa-f53f-46f8-8739-49ca8a404678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hdtl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 coredns-7db6d8ff4d-x984g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m26s
	  kube-system                 etcd-multinode-353000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m42s
	  kube-system                 kindnet-j4h99                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m27s
	  kube-system                 kube-apiserver-multinode-353000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-controller-manager-multinode-353000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 kube-proxy-v7s4q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kube-system                 kube-scheduler-multinode-353000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 9m25s                kube-proxy       
	  Normal  Starting                 116s                 kube-proxy       
	  Normal  NodeHasSufficientPID     9m42s                kubelet          Node multinode-353000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m42s                kubelet          Node multinode-353000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s                kubelet          Node multinode-353000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m42s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m27s                node-controller  Node multinode-353000 event: Registered Node multinode-353000 in Controller
	  Normal  NodeReady                9m18s                kubelet          Node multinode-353000 status is now: NodeReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node multinode-353000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node multinode-353000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node multinode-353000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node multinode-353000 event: Registered Node multinode-353000 in Controller
	
	
	Name:               multinode-353000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T19_41_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:41:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:45:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.20
	  Hostname:    multinode-353000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 32bb2f108a254471a31dc67f28f9d3d4
	  System UUID:                3b1545e7-0000-0000-88e9-620fa037ae16
	  Boot ID:                    38bf82fb-0b80-495c-b710-667d6f0da6a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fznn5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kindnet-mcx2t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m53s
	  kube-system                 kube-proxy-nz5rp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m54s (x2 over 8m54s)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s (x2 over 8m54s)  kubelet          Node multinode-353000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s (x2 over 8m54s)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m52s                  node-controller  Node multinode-353000-m02 event: Registered Node multinode-353000-m02 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-353000-m02 status is now: NodeReady
	  Normal  RegisteredNode           106s                   node-controller  Node multinode-353000-m02 event: Registered Node multinode-353000-m02 in Controller
	  Normal  NodeNotReady             66s                    node-controller  Node multinode-353000-m02 status is now: NodeNotReady
	
	
	Name:               multinode-353000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T19_42_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:42:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:43:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 11 Jun 2024 02:43:01 +0000   Tue, 11 Jun 2024 02:43:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.21
	  Hostname:    multinode-353000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0b2b5ca283d4d038600d206ae5a6972
	  System UUID:                9ed34225-0000-0000-87bc-ec0cd1dc4108
	  Boot ID:                    640ea9bf-6aae-4a1d-b22c-e4c9acf51e74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mqj8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m39s
	  kube-system                 kube-proxy-f6tzv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m39s (x2 over 7m39s)  kubelet          Node multinode-353000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x2 over 7m39s)  kubelet          Node multinode-353000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x2 over 7m39s)  kubelet          Node multinode-353000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m37s                  node-controller  Node multinode-353000-m03 event: Registered Node multinode-353000-m03 in Controller
	  Normal  NodeReady                6m57s                  kubelet          Node multinode-353000-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m7s                   node-controller  Node multinode-353000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           106s                   node-controller  Node multinode-353000-m03 event: Registered Node multinode-353000-m03 in Controller
	
	
	==> dmesg <==
	[  +5.341226] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.633037] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.245165] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.913210] systemd-fstab-generator[463]: Ignoring "noauto" option for root device
	[  +0.098315] systemd-fstab-generator[475]: Ignoring "noauto" option for root device
	[  +1.803072] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +0.064012] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.202234] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +0.110131] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.124925] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[Jun11 02:47] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.052268] kauditd_printk_skb: 117 callbacks suppressed
	[  +0.053153] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[  +0.098575] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.132187] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.403867] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +1.307475] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[Jun11 02:48] kauditd_printk_skb: 172 callbacks suppressed
	[  +2.395735] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +5.040891] kauditd_printk_skb: 70 callbacks suppressed
	[ +22.865342] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [496239ba9459] <==
	{"level":"info","ts":"2024-06-11T02:40:13.416849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became candidate at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.41688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgVoteResp from 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became leader at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 166c32860e8fd508 elected leader 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.420105Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"166c32860e8fd508","local-member-attributes":"{Name:multinode-353000 ClientURLs:[https://192.169.0.19:2379]}","request-path":"/0/members/166c32860e8fd508/attributes","cluster-id":"f10222c540877db9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-11T02:40:13.420141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.420334Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.420479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.422269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.19:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.42366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.426545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.426575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.443729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443804Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443841Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:45:32.030377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-11T02:45:32.030416Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-353000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"]}
	{"level":"warn","ts":"2024-06-11T02:45:32.030463Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.030528Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.057343Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.057367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.19:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-11T02:45:32.057436Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"166c32860e8fd508","current-leader-member-id":"166c32860e8fd508"}
	{"level":"info","ts":"2024-06-11T02:45:32.058299Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:45:32.058389Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:45:32.058397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-353000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"]}
	
	
	==> etcd [67aae91d2285] <==
	{"level":"info","ts":"2024-06-11T02:47:58.075051Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:47:58.075114Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:47:58.080222Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"166c32860e8fd508","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-11T02:47:58.080507Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.081545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.081606Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.082237Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-11T02:47:58.082665Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"166c32860e8fd508","initial-advertise-peer-urls":["https://192.169.0.19:2380"],"listen-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-11T02:47:58.083061Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-11T02:47:58.083578Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:47:58.083777Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:47:58.539957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.540002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.540209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgPreVoteResp from 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.54026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became candidate at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgVoteResp from 166c32860e8fd508 at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became leader at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 166c32860e8fd508 elected leader 166c32860e8fd508 at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.545874Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"166c32860e8fd508","local-member-attributes":"{Name:multinode-353000 ClientURLs:[https://192.169.0.19:2379]}","request-path":"/0/members/166c32860e8fd508/attributes","cluster-id":"f10222c540877db9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-11T02:47:58.546009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:47:58.545972Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:47:58.547719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-11T02:47:58.550104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-11T02:47:58.551389Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-11T02:47:58.553594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.19:2379"}
	
	
	==> kernel <==
	 02:49:59 up 4 min,  0 users,  load average: 0.13, 0.08, 0.02
	Linux multinode-353000 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8adfed7dcc38] <==
	I0611 02:49:12.958046       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:49:22.967295       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:49:22.967428       1 main.go:227] handling current node
	I0611 02:49:22.967513       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:49:22.967559       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:49:22.967935       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:49:22.968010       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:49:32.972971       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:49:32.973127       1 main.go:227] handling current node
	I0611 02:49:32.973250       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:49:32.973353       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:49:32.973678       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:49:32.973801       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:49:42.978094       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:49:42.978128       1 main.go:227] handling current node
	I0611 02:49:42.978137       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:49:42.978141       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:49:42.978315       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:49:42.978381       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:49:52.990680       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:49:52.990945       1 main.go:227] handling current node
	I0611 02:49:52.991054       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:49:52.991136       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:49:52.991265       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:49:52.991377       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f854aa2e2bd3] <==
	I0611 02:44:46.374755       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:44:56.379765       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:44:56.379800       1 main.go:227] handling current node
	I0611 02:44:56.379809       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:44:56.379813       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:44:56.380004       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:44:56.380081       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:06.387267       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:06.387415       1 main.go:227] handling current node
	I0611 02:45:06.387438       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:06.387530       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:06.387707       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:06.387767       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:16.398174       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:16.398207       1 main.go:227] handling current node
	I0611 02:45:16.398215       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:16.398219       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:16.398282       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:16.398306       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:26.402961       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:26.403014       1 main.go:227] handling current node
	I0611 02:45:26.403023       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:26.403028       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:26.403145       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:26.403174       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [18988fa5e4f4] <==
	I0611 02:47:59.908944       1 shared_informer.go:320] Caches are synced for configmaps
	I0611 02:47:59.909256       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0611 02:47:59.909519       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0611 02:47:59.909555       1 aggregator.go:165] initial CRD sync complete...
	I0611 02:47:59.909561       1 autoregister_controller.go:141] Starting autoregister controller
	I0611 02:47:59.909564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0611 02:47:59.909568       1 cache.go:39] Caches are synced for autoregister controller
	I0611 02:47:59.912589       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0611 02:47:59.915817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0611 02:47:59.916043       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0611 02:47:59.916367       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0611 02:47:59.916508       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0611 02:47:59.963218       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0611 02:47:59.963277       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0611 02:47:59.963852       1 policy_source.go:224] refreshing policies
	I0611 02:47:59.980820       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0611 02:48:00.814645       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0611 02:48:01.025076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.19]
	I0611 02:48:01.026199       1 controller.go:615] quota admission added evaluator for: endpoints
	I0611 02:48:01.031513       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0611 02:48:01.761603       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0611 02:48:01.928471       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0611 02:48:01.947406       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0611 02:48:01.991226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0611 02:48:01.997090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e847ea1ccea3] <==
	W0611 02:45:33.054541       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054753       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054897       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054965       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054485       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053684       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053702       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055039       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053718       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054788       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055162       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055246       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055342       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055398       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055476       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055630       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055255       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055686       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054764       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053658       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055278       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055325       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055866       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055938       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.056162       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [254a0e0afe62] <==
	I0611 02:40:32.758858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.352606ms"
	I0611 02:40:32.759042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.362µs"
	I0611 02:40:40.910014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.455µs"
	I0611 02:40:40.919760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.148µs"
	I0611 02:40:41.128812       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0611 02:40:42.122795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.582µs"
	I0611 02:40:42.147670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.018989ms"
	I0611 02:40:42.147737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.798µs"
	I0611 02:41:05.726747       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m02\" does not exist"
	I0611 02:41:05.736926       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m02" podCIDRs=["10.244.1.0/24"]
	I0611 02:41:06.133872       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m02"
	I0611 02:41:48.707406       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:41:50.827299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.246398ms"
	I0611 02:41:50.836431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.08559ms"
	I0611 02:41:50.836953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.263µs"
	I0611 02:41:53.908886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.755154ms"
	I0611 02:41:53.909672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.964µs"
	I0611 02:41:54.537772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.288076ms"
	I0611 02:41:54.537833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.558µs"
	I0611 02:42:19.344515       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m03\" does not exist"
	I0611 02:42:19.344568       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:42:19.349890       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m03" podCIDRs=["10.244.2.0/24"]
	I0611 02:42:21.151832       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m03"
	I0611 02:43:01.974195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:43:51.177548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	
	
	==> kube-controller-manager [f7b455045500] <==
	I0611 02:48:12.863445       1 shared_informer.go:320] Caches are synced for persistent volume
	I0611 02:48:12.863718       1 shared_informer.go:320] Caches are synced for attach detach
	I0611 02:48:12.863935       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0611 02:48:12.863727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0611 02:48:12.863732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0611 02:48:12.863741       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0611 02:48:12.868674       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0611 02:48:12.870923       1 shared_informer.go:320] Caches are synced for daemon sets
	I0611 02:48:12.872724       1 shared_informer.go:320] Caches are synced for cronjob
	I0611 02:48:12.890364       1 shared_informer.go:320] Caches are synced for job
	I0611 02:48:12.918816       1 shared_informer.go:320] Caches are synced for disruption
	I0611 02:48:12.922504       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0611 02:48:12.992005       1 shared_informer.go:320] Caches are synced for deployment
	I0611 02:48:13.002177       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0611 02:48:13.002383       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.616µs"
	I0611 02:48:13.002398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.753µs"
	I0611 02:48:13.009936       1 shared_informer.go:320] Caches are synced for resource quota
	I0611 02:48:13.014332       1 shared_informer.go:320] Caches are synced for crt configmap
	I0611 02:48:13.059369       1 shared_informer.go:320] Caches are synced for resource quota
	I0611 02:48:13.074262       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0611 02:48:13.484894       1 shared_informer.go:320] Caches are synced for garbage collector
	I0611 02:48:13.489351       1 shared_informer.go:320] Caches are synced for garbage collector
	I0611 02:48:13.489486       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0611 02:48:52.871429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.954301ms"
	I0611 02:48:52.871670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.1µs"
	
	
	==> kube-proxy [1b251ec109bf] <==
	I0611 02:40:32.780056       1 server_linux.go:69] "Using iptables proxy"
	I0611 02:40:32.794486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.19"]
	I0611 02:40:32.857420       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0611 02:40:32.857441       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0611 02:40:32.857452       1 server_linux.go:165] "Using iptables Proxier"
	I0611 02:40:32.859777       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0611 02:40:32.859889       1 server.go:872] "Version info" version="v1.30.1"
	I0611 02:40:32.859898       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:40:32.861522       1 config.go:192] "Starting service config controller"
	I0611 02:40:32.861557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0611 02:40:32.861607       1 config.go:101] "Starting endpoint slice config controller"
	I0611 02:40:32.861612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0611 02:40:32.862416       1 config.go:319] "Starting node config controller"
	I0611 02:40:32.862445       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0611 02:40:32.962479       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0611 02:40:32.962565       1 shared_informer.go:320] Caches are synced for service config
	I0611 02:40:32.969480       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [26a1110268f5] <==
	I0611 02:48:02.001653       1 server_linux.go:69] "Using iptables proxy"
	I0611 02:48:02.013979       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.19"]
	I0611 02:48:02.057499       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0611 02:48:02.057540       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0611 02:48:02.057555       1 server_linux.go:165] "Using iptables Proxier"
	I0611 02:48:02.059982       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0611 02:48:02.060269       1 server.go:872] "Version info" version="v1.30.1"
	I0611 02:48:02.060300       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:48:02.061760       1 config.go:192] "Starting service config controller"
	I0611 02:48:02.061875       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0611 02:48:02.061927       1 config.go:101] "Starting endpoint slice config controller"
	I0611 02:48:02.061950       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0611 02:48:02.062636       1 config.go:319] "Starting node config controller"
	I0611 02:48:02.062663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0611 02:48:02.162369       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0611 02:48:02.162444       1 shared_informer.go:320] Caches are synced for service config
	I0611 02:48:02.162680       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4f9c6abaf085] <==
	E0611 02:40:14.372584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:14.372745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:14.372819       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.182489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:15.182664       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.203927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0611 02:40:15.203983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0611 02:40:15.281257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0611 02:40:15.281362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:15.290251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0611 02:40:15.290425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0611 02:40:15.336462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.336589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.431159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0611 02:40:15.431203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0611 02:40:15.442927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.442968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.494146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.494219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.551457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.551500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0611 02:40:17.163038       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0611 02:45:32.082918       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0611 02:45:32.083248       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0611 02:45:32.083296       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5d4dc7f0171a] <==
	I0611 02:47:58.678119       1 serving.go:380] Generated self-signed cert in-memory
	W0611 02:47:59.868071       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0611 02:47:59.868111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:47:59.868235       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0611 02:47:59.868322       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0611 02:47:59.892253       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0611 02:47:59.892287       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:47:59.893518       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0611 02:47:59.893582       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0611 02:47:59.893744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0611 02:47:59.893978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0611 02:47:59.994411       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 11 02:48:02 multinode-353000 kubelet[1237]: E0611 02:48:02.699795    1237 projected.go:200] Error preparing data for projected volume kube-api-access-wc6pz for pod default/busybox-fc5497c4f-4hdtl: object "default"/"kube-root-ca.crt" not registered
	Jun 11 02:48:02 multinode-353000 kubelet[1237]: E0611 02:48:02.699917    1237 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c820421-de3f-4771-b4c1-aac0ed316723-kube-api-access-wc6pz podName:3c820421-de3f-4771-b4c1-aac0ed316723 nodeName:}" failed. No retries permitted until 2024-06-11 02:48:04.699898748 +0000 UTC m=+7.777719836 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wc6pz" (UniqueName: "kubernetes.io/projected/3c820421-de3f-4771-b4c1-aac0ed316723-kube-api-access-wc6pz") pod "busybox-fc5497c4f-4hdtl" (UID: "3c820421-de3f-4771-b4c1-aac0ed316723") : object "default"/"kube-root-ca.crt" not registered
	Jun 11 02:48:03 multinode-353000 kubelet[1237]: E0611 02:48:03.084770    1237 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x984g" podUID="b2354103-bb58-4679-869f-a2ada1414513"
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.086327    1237 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-4hdtl" podUID="3c820421-de3f-4771-b4c1-aac0ed316723"
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.615324    1237 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.615530    1237 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b2354103-bb58-4679-869f-a2ada1414513-config-volume podName:b2354103-bb58-4679-869f-a2ada1414513 nodeName:}" failed. No retries permitted until 2024-06-11 02:48:08.615494672 +0000 UTC m=+11.693315760 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b2354103-bb58-4679-869f-a2ada1414513-config-volume") pod "coredns-7db6d8ff4d-x984g" (UID: "b2354103-bb58-4679-869f-a2ada1414513") : object "kube-system"/"coredns" not registered
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.715877    1237 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.715988    1237 projected.go:200] Error preparing data for projected volume kube-api-access-wc6pz for pod default/busybox-fc5497c4f-4hdtl: object "default"/"kube-root-ca.crt" not registered
	Jun 11 02:48:04 multinode-353000 kubelet[1237]: E0611 02:48:04.716063    1237 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c820421-de3f-4771-b4c1-aac0ed316723-kube-api-access-wc6pz podName:3c820421-de3f-4771-b4c1-aac0ed316723 nodeName:}" failed. No retries permitted until 2024-06-11 02:48:08.716050629 +0000 UTC m=+11.793871712 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wc6pz" (UniqueName: "kubernetes.io/projected/3c820421-de3f-4771-b4c1-aac0ed316723-kube-api-access-wc6pz") pod "busybox-fc5497c4f-4hdtl" (UID: "3c820421-de3f-4771-b4c1-aac0ed316723") : object "default"/"kube-root-ca.crt" not registered
	Jun 11 02:48:05 multinode-353000 kubelet[1237]: E0611 02:48:05.085374    1237 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x984g" podUID="b2354103-bb58-4679-869f-a2ada1414513"
	Jun 11 02:48:05 multinode-353000 kubelet[1237]: I0611 02:48:05.272706    1237 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: I0611 02:48:32.544417    1237 scope.go:117] "RemoveContainer" containerID="130521568c691ad88511924448b027ea5017bb130505a8d01871828a60561d29"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: I0611 02:48:32.544879    1237 scope.go:117] "RemoveContainer" containerID="310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: E0611 02:48:32.545019    1237 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(95aa7c05-392e-49d4-8604-12400011c22b)\"" pod="kube-system/storage-provisioner" podUID="95aa7c05-392e-49d4-8604-12400011c22b"
	Jun 11 02:48:47 multinode-353000 kubelet[1237]: I0611 02:48:47.085051    1237 scope.go:117] "RemoveContainer" containerID="310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2"
	Jun 11 02:48:57 multinode-353000 kubelet[1237]: E0611 02:48:57.099062    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:49:57 multinode-353000 kubelet[1237]: E0611 02:49:57.094570    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-353000 -n multinode-353000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-353000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (285.72s)
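
Triage note: the node describe output in the post-mortem above shows the pattern behind this failure. After the cluster restart, the node-controller re-registers both workers (RegisteredNode at 106s) while their kubelets have stopped posting status, leaving NodeNotReady events, unreachable taints, and Unknown conditions on multinode-353000-m03. When reproducing locally, a quick programmatic readiness check can separate an unreachable apiserver from a stopped worker kubelet. Below is a minimal client-go sketch, assuming a reachable cluster; the kubeconfig path is illustrative, not the harness's actual path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; the integration tests point KUBECONFIG elsewhere.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // A connection error here implicates the apiserver, not the kubelets.
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            ready := "Unknown"
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    // "True", "False", or "Unknown" (kubelet stopped posting status).
                    ready = string(c.Status)
                }
            }
            fmt.Printf("%-25s Ready=%s\n", n.Name, ready)
        }
    }

A node printing Ready=Unknown here corresponds to the "Kubelet stopped posting node status." conditions captured above, which is also consistent with the "kubelet: Stopped" status reported for multinode-353000-m02 in the DeleteNode output below.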

                                                
                                    
TestMultiNode/serial/DeleteNode (154.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 node delete m03
E0610 19:50:36.264863    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 node delete m03: (2m30.679349041s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr: exit status 2 (245.014533ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0610 19:52:31.180292   10106 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:52:31.180579   10106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:52:31.180585   10106 out.go:304] Setting ErrFile to fd 2...
	I0610 19:52:31.180588   10106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:52:31.180768   10106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:52:31.180967   10106 out.go:298] Setting JSON to false
	I0610 19:52:31.180989   10106 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:52:31.181029   10106 notify.go:220] Checking for updates...
	I0610 19:52:31.181295   10106 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:52:31.181313   10106 status.go:255] checking status of multinode-353000 ...
	I0610 19:52:31.181749   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.181802   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.190690   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53871
	I0610 19:52:31.191037   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.191461   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.191471   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.191704   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.191825   10106 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:52:31.191911   10106 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:52:31.191985   10106 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 10002
	I0610 19:52:31.192998   10106 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:52:31.193016   10106 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:52:31.193266   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.193291   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.201641   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53874
	I0610 19:52:31.201962   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.202336   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.202357   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.202616   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.202743   10106 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:52:31.202827   10106 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:52:31.203088   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.203112   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.211552   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53876
	I0610 19:52:31.211878   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.212218   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.212237   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.212417   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.212499   10106 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:52:31.212727   10106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:52:31.212748   10106 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:52:31.212831   10106 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:52:31.212912   10106 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:52:31.213015   10106 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:52:31.213100   10106 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:52:31.246794   10106 ssh_runner.go:195] Run: systemctl --version
	I0610 19:52:31.251171   10106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:52:31.263000   10106 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:52:31.263028   10106 api_server.go:166] Checking apiserver status ...
	I0610 19:52:31.263067   10106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:52:31.274011   10106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1536/cgroup
	W0610 19:52:31.281716   10106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1536/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:52:31.281775   10106 ssh_runner.go:195] Run: ls
	I0610 19:52:31.284997   10106 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:52:31.288664   10106 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:52:31.288676   10106 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:52:31.288684   10106 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:52:31.288703   10106 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:52:31.288974   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.289003   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.297685   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53880
	I0610 19:52:31.298020   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.298386   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.298404   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.298608   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.298730   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:52:31.298823   10106 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:52:31.298899   10106 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 10028
	I0610 19:52:31.299911   10106 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:52:31.299923   10106 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:52:31.300188   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.300212   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.308596   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53882
	I0610 19:52:31.308919   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.309243   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.309255   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.309477   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.309586   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:52:31.309667   10106 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:52:31.309920   10106 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:31.309950   10106 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:31.318492   10106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53884
	I0610 19:52:31.318804   10106 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:31.319152   10106 main.go:141] libmachine: Using API Version  1
	I0610 19:52:31.319171   10106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:31.319371   10106 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:31.319476   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:52:31.319618   10106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:52:31.319630   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:52:31.319708   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:52:31.319787   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:52:31.319862   10106 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:52:31.319936   10106 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:52:31.355810   10106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:52:31.366460   10106 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
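
The stderr trace above shows how the status probe verifies each node: on the control plane it locates the kube-apiserver process with pgrep, then issues a GET against https://192.169.0.19:8443/healthz and accepts the "200: ok" response; on the worker it only asks systemd whether kubelet is active, which is how the stopped m02 kubelet is detected. A minimal sketch of the health request follows, assuming for brevity that TLS verification is skipped (the real check trusts the cluster CA material instead):

// healthzprobe.go - a minimal sketch of the apiserver health probe seen in
// the trace above (illustration only). InsecureSkipVerify is an assumption
// made for brevity; minikube's status path trusts the cluster CA instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.169.0.19:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
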
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr" : exit status 2
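
That exit status 2 is the whole failure here: minikube status reports component health through its exit code, so a cluster where any checked component is stopped (the kubelet on multinode-353000-m02 above) exits nonzero even though the report itself prints normally. A minimal sketch of recovering that code from Go, reusing the binary path and profile name from this run; treating any nonzero exit as "degraded" is a working assumption, not minikube's documented contract:

// statusexit.go - a minimal sketch of reading minikube's status exit code
// (illustration only; binary path and profile name are taken from this run).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-353000", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 2 was observed above while the m02 kubelet was stopped.
		fmt.Printf("status exited %d: at least one component is not running\n", ee.ExitCode())
	}
}
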
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-353000 -n multinode-353000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 logs -n 25: (2.723524864s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m03 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp testdata/cp-test.txt                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000:/home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000 sudo cat                                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt                                                          | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m02:/home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n                                                                                                    | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | multinode-353000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-353000 ssh -n multinode-353000-m02 sudo cat                                                                      | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	|         | /home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-353000 node stop m03                                                                                             | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT | 10 Jun 24 19:43 PDT |
	| node    | multinode-353000 node start                                                                                                | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:43 PDT |                     |
	|         | m03 -v=7 --alsologtostderr                                                                                                 |                  |         |         |                     |                     |
	| node    | list -p multinode-353000                                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT |                     |
	| stop    | -p multinode-353000                                                                                                        | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT | 10 Jun 24 19:45 PDT |
	| start   | -p multinode-353000                                                                                                        | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:45 PDT |                     |
	|         | --wait=true -v=8                                                                                                           |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                          |                  |         |         |                     |                     |
	| node    | list -p multinode-353000                                                                                                   | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:49 PDT |                     |
	| node    | multinode-353000 node delete                                                                                               | multinode-353000 | jenkins | v1.33.1 | 10 Jun 24 19:50 PDT | 10 Jun 24 19:52 PDT |
	|         | m03                                                                                                                        |                  |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 19:45:39
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 19:45:39.692404    9989 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:45:39.692578    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692584    9989 out.go:304] Setting ErrFile to fd 2...
	I0610 19:45:39.692587    9989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:45:39.692759    9989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:45:39.694238    9989 out.go:298] Setting JSON to false
	I0610 19:45:39.716699    9989 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26095,"bootTime":1718047844,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:45:39.716794    9989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:45:39.738878    9989 out.go:177] * [multinode-353000] minikube v1.33.1 on Darwin 14.4.1
	I0610 19:45:39.781353    9989 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:45:39.781374    9989 notify.go:220] Checking for updates...
	I0610 19:45:39.824429    9989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:45:39.845512    9989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:45:39.866367    9989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:45:39.887316    9989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:45:39.908278    9989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:45:39.929733    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:39.929854    9989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:45:39.930309    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:39.930346    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:39.939199    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53775
	I0610 19:45:39.939566    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:39.939970    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:39.939978    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:39.940198    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:39.940315    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:39.969508    9989 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 19:45:40.011453    9989 start.go:297] selected driver: hyperkit
	I0610 19:45:40.011484    9989 start.go:901] validating driver "hyperkit" against &{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.011697    9989 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:45:40.011899    9989 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.012122    9989 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 19:45:40.022075    9989 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 19:45:40.025893    9989 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.025915    9989 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 19:45:40.028541    9989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:45:40.028616    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:45:40.028625    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:45:40.028709    9989 start.go:340] cluster config:
	{Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:45:40.028811    9989 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 19:45:40.071375    9989 out.go:177] * Starting "multinode-353000" primary control-plane node in "multinode-353000" cluster
	I0610 19:45:40.092477    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:45:40.092569    9989 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 19:45:40.092595    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:45:40.092792    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:45:40.092810    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:45:40.092980    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.093894    9989 start.go:360] acquireMachinesLock for multinode-353000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:45:40.094018    9989 start.go:364] duration metric: took 96.418µs to acquireMachinesLock for "multinode-353000"
	I0610 19:45:40.094053    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:45:40.094073    9989 fix.go:54] fixHost starting: 
	I0610 19:45:40.094498    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:45:40.094536    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:45:40.103465    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53777
	I0610 19:45:40.103833    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:45:40.104164    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:45:40.104180    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:45:40.104403    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:45:40.104528    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.104641    9989 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:45:40.104724    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.104851    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:45:40.105788    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.105820    9989 fix.go:112] recreateIfNeeded on multinode-353000: state=Stopped err=<nil>
	I0610 19:45:40.105834    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	W0610 19:45:40.105913    9989 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:45:40.148276    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000" ...
	I0610 19:45:40.169332    9989 main.go:141] libmachine: (multinode-353000) Calling .Start
	I0610 19:45:40.169590    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.169632    9989 main.go:141] libmachine: (multinode-353000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid
	I0610 19:45:40.171495    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 9523 missing from process table
	I0610 19:45:40.171526    9989 main.go:141] libmachine: (multinode-353000) DBG | pid 9523 is in state "Stopped"
	I0610 19:45:40.171559    9989 main.go:141] libmachine: (multinode-353000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid...
	I0610 19:45:40.171882    9989 main.go:141] libmachine: (multinode-353000) DBG | Using UUID f0e955cd-5ea6-4315-ac08-1f17bf5837e0
	I0610 19:45:40.275926    9989 main.go:141] libmachine: (multinode-353000) DBG | Generated MAC 6e:10:a7:68:76:8c
	I0610 19:45:40.275947    9989 main.go:141] libmachine: (multinode-353000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:45:40.276073    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276103    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f0e955cd-5ea6-4315-ac08-1f17bf5837e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b1380)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 19:45:40.276164    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f0e955cd-5ea6-4315-ac08-1f17bf5837e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:45:40.276203    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f0e955cd-5ea6-4315-ac08-1f17bf5837e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/multinode-353000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:45:40.276224    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:45:40.277704    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 DEBUG: hyperkit: Pid is 10002
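
The restart above first has to recover from an unclean shutdown: the old hyperkit pid (9523) is still recorded in hyperkit.pid but missing from the process table, so the driver removes the stale file before launching a fresh VM process (pid 10002). A minimal sketch of that staleness check follows; the path and single-pid file layout are simplifying assumptions, and this is an illustration rather than the driver's code:

// stalepid.go - a minimal sketch of the stale-pid-file check performed above
// (illustration only; path and file layout are simplifying assumptions).
// Unix-only: it relies on syscall.Kill.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	path := os.ExpandEnv("$HOME/.minikube/machines/multinode-353000/hyperkit.pid")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("no pid file; nothing to clean up")
		return
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return
	}
	// Signal 0 probes for process existence without delivering a signal.
	if syscall.Kill(pid, 0) != nil {
		fmt.Printf("pid %d missing from process table; removing stale pid file\n", pid)
		os.Remove(path)
	}
}
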
	I0610 19:45:40.278259    9989 main.go:141] libmachine: (multinode-353000) DBG | Attempt 0
	I0610 19:45:40.278270    9989 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:45:40.278351    9989 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 10002
	I0610 19:45:40.279973    9989 main.go:141] libmachine: (multinode-353000) DBG | Searching for 6e:10:a7:68:76:8c in /var/db/dhcpd_leases ...
	I0610 19:45:40.280067    9989 main.go:141] libmachine: (multinode-353000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:45:40.280108    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:45:40.280134    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:45:40.280161    9989 main.go:141] libmachine: (multinode-353000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690a76}
	I0610 19:45:40.280185    9989 main.go:141] libmachine: (multinode-353000) DBG | Found match: 6e:10:a7:68:76:8c
	I0610 19:45:40.280206    9989 main.go:141] libmachine: (multinode-353000) DBG | IP: 192.169.0.19
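
hyperkit VMs lease their addresses from the macOS vmnet DHCP server, so the driver recovers a node's IP by matching the MAC it generated (6e:10:a7:68:76:8c) against the entries in /var/db/dhcpd_leases, as logged above. A minimal sketch of that lookup follows; the entry layout (ip_address listed before hw_address within each lease) is an assumption about the lease-file format:

// leaselookup.go - a minimal sketch of resolving a VM's IP from the macOS
// DHCP lease file, as the driver does above (illustration only).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const mac = "6e:10:a7:68:76:8c" // MAC generated for multinode-353000
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember this entry's IP
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			fmt.Println("found lease:", ip) // expect 192.169.0.19 per the log above
			return
		}
	}
	fmt.Println("no lease found for", mac)
}
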
	I0610 19:45:40.280241    9989 main.go:141] libmachine: (multinode-353000) Calling .GetConfigRaw
	I0610 19:45:40.280942    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:40.281154    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:45:40.281614    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:45:40.281625    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:40.281737    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:40.281835    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:40.281925    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282030    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:40.282140    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:40.282302    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:40.282507    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:40.282515    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:45:40.285439    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:45:40.338413    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:45:40.339064    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.339085    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.339092    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.339099    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.721279    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:45:40.721293    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:45:40.835864    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:45:40.835901    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:45:40.835915    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:45:40.835928    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:45:40.836766    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:45:40.836785    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:45:46.073475    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:45:46.073515    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:45:46.073529    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:45:46.097300    9989 main.go:141] libmachine: (multinode-353000) DBG | 2024/06/10 19:45:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:45:51.340943    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:45:51.340958    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341127    9989 buildroot.go:166] provisioning hostname "multinode-353000"
	I0610 19:45:51.341138    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.341240    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.341331    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.341432    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341515    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.341599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.341733    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.341882    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.341891    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000 && echo "multinode-353000" | sudo tee /etc/hostname
	I0610 19:45:51.407130    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000
	
	I0610 19:45:51.407155    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.407278    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.407374    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407468    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.407561    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.407694    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.407848    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.407859    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:45:51.468420    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
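
Provisioning then proceeds entirely over SSH: the driver sets the guest hostname and rewrites the 127.0.1.1 entry in /etc/hosts (the script above) so the node name resolves locally before cluster DNS exists. A minimal sketch of running such a command with the golang.org/x/crypto/ssh package follows, reusing the key path and address from this run; ignoring host keys is assumed acceptable only because the VM is disposable, and this is an illustration rather than libmachine's SSH runner:

// sshprovision.go - a minimal sketch of running a provisioning command over
// SSH as the driver does above (illustration only; requires the
// golang.org/x/crypto/ssh module).
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-353000/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.169.0.19:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: fine for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo hostname multinode-353000 && echo "multinode-353000" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
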
	I0610 19:45:51.468442    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:45:51.468459    9989 buildroot.go:174] setting up certificates
	I0610 19:45:51.468467    9989 provision.go:84] configureAuth start
	I0610 19:45:51.468474    9989 main.go:141] libmachine: (multinode-353000) Calling .GetMachineName
	I0610 19:45:51.468599    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:51.468700    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.468783    9989 provision.go:143] copyHostCerts
	I0610 19:45:51.468813    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.468881    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:45:51.468890    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:45:51.469023    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:45:51.469222    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469262    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:45:51.469268    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:45:51.469346    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:45:51.469495    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469543    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:45:51.469552    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:45:51.469665    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:45:51.469841    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000 san=[127.0.0.1 192.169.0.19 localhost minikube multinode-353000]
	I0610 19:45:51.574939    9989 provision.go:177] copyRemoteCerts
	I0610 19:45:51.575027    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:45:51.575057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.575258    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.575433    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.575607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.575800    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
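	The sshutil line above builds an SSH client from the machine's id_rsa key. A rough equivalent using the golang.org/x/crypto/ssh package is sketched below; the key path, user, and the disabled host-key check are assumptions for illustration only, not what sshutil.go actually does.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-353000/id_rsa"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; do not do this in production
		}
		client, err := ssh.Dial("tcp", "192.169.0.19:22", cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer client.Close()
		fmt.Println("connected")
	}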
	I0610 19:45:51.610260    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:45:51.610345    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 19:45:51.630147    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:45:51.630204    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:45:51.650528    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:45:51.650589    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:45:51.670054    9989 provision.go:87] duration metric: took 201.581041ms to configureAuth
	I0610 19:45:51.670067    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:45:51.670242    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:45:51.670255    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:51.670386    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.670503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.670607    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670720    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.670803    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.670922    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.671045    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.671053    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:45:51.726480    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:45:51.726495    9989 buildroot.go:70] root file system type: tmpfs
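	The root-filesystem probe above runs "df --output=fstype /" on the guest and keeps the last line of output. The same probe as a small Go program, assuming GNU coreutils df as on the Buildroot guest:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("df", "--output=fstype", "/").Output()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		// Output is a header line ("Type") followed by the fs type itself.
		fields := strings.Fields(strings.TrimSpace(string(out)))
		fmt.Println("root fs type:", fields[len(fields)-1]) // e.g. "tmpfs" in the log above
	}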
	I0610 19:45:51.726575    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:45:51.726593    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.726736    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.726853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.726941    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.727024    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.727157    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.727300    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.727345    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:45:51.793222    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 19:45:51.793246    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:51.793378    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:51.793475    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793564    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:51.793652    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:51.793772    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:51.793927    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:51.793939    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:45:53.421030    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 19:45:53.421054    9989 machine.go:97] duration metric: took 13.139887748s to provisionDockerMachine
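	The docker.service update above follows a write-then-swap pattern: diff the freshly rendered unit against the installed one, and only when they differ move the new file into place and daemon-reload/enable/restart. A local Go sketch of that update-if-changed logic follows; the path, unit content, and service name are illustrative.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged; leave the running service alone
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		// Re-read units and (re)start the service only because the file changed.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}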
	I0610 19:45:53.421087    9989 start.go:293] postStartSetup for "multinode-353000" (driver="hyperkit")
	I0610 19:45:53.421100    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:45:53.421124    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.421309    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:45:53.421321    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.421404    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.421503    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.421591    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.421689    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.456942    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:45:53.459812    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:45:53.459822    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:45:53.459827    9989 command_runner.go:130] > ID=buildroot
	I0610 19:45:53.459833    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:45:53.459840    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:45:53.459988    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:45:53.459999    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:45:53.460114    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:45:53.460308    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:45:53.460314    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:45:53.460524    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:45:53.467718    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:45:53.486520    9989 start.go:296] duration metric: took 65.424192ms for postStartSetup
	I0610 19:45:53.486540    9989 fix.go:56] duration metric: took 13.392941824s for fixHost
	I0610 19:45:53.486552    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.486683    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.486777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486853    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.486935    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.487060    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:45:53.487195    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.19 22 <nil> <nil>}
	I0610 19:45:53.487202    9989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:45:53.540939    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718073953.908242527
	
	I0610 19:45:53.540950    9989 fix.go:216] guest clock: 1718073953.908242527
	I0610 19:45:53.540963    9989 fix.go:229] Guest: 2024-06-10 19:45:53.908242527 -0700 PDT Remote: 2024-06-10 19:45:53.486543 -0700 PDT m=+13.831437270 (delta=421.699527ms)
	I0610 19:45:53.540982    9989 fix.go:200] guest clock delta is within tolerance: 421.699527ms
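	The guest clock check parses the "date +%s.%N" output and compares it against the host clock. A compact Go sketch of that delta computation, reusing the guest timestamp from the log above; the 2s tolerance is an assumption for illustration, not necessarily the value fix.go uses.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		raw := "1718073953.908242527" // guest output captured in the log above
		parts := strings.SplitN(raw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		host := time.Now()
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest=%s delta=%s within-tolerance=%v\n",
			guest.Format(time.RFC3339Nano), delta, delta < 2*time.Second)
	}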
	I0610 19:45:53.540986    9989 start.go:83] releasing machines lock for "multinode-353000", held for 13.447423727s
	I0610 19:45:53.541004    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541129    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:45:53.541236    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541536    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541646    9989 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:45:53.541706    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:45:53.541734    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541762    9989 ssh_runner.go:195] Run: cat /version.json
	I0610 19:45:53.541777    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:45:53.541836    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541857    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:45:53.541939    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.541956    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:45:53.542057    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542069    9989 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:45:53.542145    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.542159    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:45:53.621904    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:45:53.622832    9989 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 19:45:53.623012    9989 ssh_runner.go:195] Run: systemctl --version
	I0610 19:45:53.628064    9989 command_runner.go:130] > systemd 252 (252)
	I0610 19:45:53.628086    9989 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 19:45:53.628210    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:45:53.632390    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:45:53.632443    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:45:53.632487    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:45:53.644499    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:45:53.644515    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:45:53.644525    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.644620    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:53.659247    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:45:53.659535    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:45:53.668457    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:45:53.677198    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:45:53.677239    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:45:53.685876    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.694608    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:45:53.703186    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:45:53.711800    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:45:53.720598    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:45:53.729427    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:45:53.738123    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:45:53.747019    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:45:53.754733    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:45:53.754901    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:45:53.762666    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:53.871758    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
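	The sed commands above force containerd onto the cgroupfs driver by rewriting SystemdCgroup (and related keys) in /etc/containerd/config.toml before restarting the service. The same SystemdCgroup rewrite done with Go's regexp package instead of sed; the sample config content is made up for illustration.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")
		// Match the whole assignment line, keeping its leading indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
		fmt.Print(string(out))
	}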
	I0610 19:45:53.891305    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:45:53.891381    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:45:53.902978    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:45:53.903571    9989 command_runner.go:130] > [Unit]
	I0610 19:45:53.903596    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:45:53.903615    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:45:53.903621    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:45:53.903625    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:45:53.903632    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:45:53.903636    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:45:53.903639    9989 command_runner.go:130] > [Service]
	I0610 19:45:53.903642    9989 command_runner.go:130] > Type=notify
	I0610 19:45:53.903647    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:45:53.903653    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:45:53.903663    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:45:53.903670    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:45:53.903675    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:45:53.903681    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:45:53.903687    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:45:53.903693    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:45:53.903705    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:45:53.903711    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:45:53.903716    9989 command_runner.go:130] > ExecStart=
	I0610 19:45:53.903727    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:45:53.903732    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:45:53.903739    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:45:53.903744    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:45:53.903748    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:45:53.903751    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:45:53.903755    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:45:53.903763    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:45:53.903768    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:45:53.903771    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:45:53.903775    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:45:53.903780    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:45:53.903783    9989 command_runner.go:130] > Delegate=yes
	I0610 19:45:53.903788    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:45:53.903792    9989 command_runner.go:130] > KillMode=process
	I0610 19:45:53.903795    9989 command_runner.go:130] > [Install]
	I0610 19:45:53.903804    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:45:53.903867    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.918134    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:45:53.937012    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:45:53.947454    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.957667    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:45:53.978657    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:45:53.989706    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:45:54.004573    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:45:54.004840    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:45:54.007767    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:45:54.007939    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:45:54.015068    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:45:54.028412    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:45:54.125186    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:45:54.244241    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:45:54.244317    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 19:45:54.259051    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:45:54.351224    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:45:56.651603    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.30043865s)
	I0610 19:45:56.651667    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 19:45:56.662260    9989 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 19:47:54.346370    9989 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m57.688173109s)
	I0610 19:47:54.346439    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.357366    9989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 19:47:54.453493    9989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 19:47:54.558404    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.660727    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 19:47:54.674518    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 19:47:54.685725    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:54.789246    9989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 19:47:54.849081    9989 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 19:47:54.849165    9989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 19:47:54.853149    9989 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0610 19:47:54.853161    9989 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 19:47:54.853166    9989 command_runner.go:130] > Device: 0,22	Inode: 754         Links: 1
	I0610 19:47:54.853172    9989 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0610 19:47:54.853177    9989 command_runner.go:130] > Access: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853185    9989 command_runner.go:130] > Modify: 2024-06-11 02:47:55.209828807 +0000
	I0610 19:47:54.853193    9989 command_runner.go:130] > Change: 2024-06-11 02:47:55.210828405 +0000
	I0610 19:47:54.853197    9989 command_runner.go:130] >  Birth: -
	I0610 19:47:54.853348    9989 start.go:562] Will wait 60s for crictl version
	I0610 19:47:54.853398    9989 ssh_runner.go:195] Run: which crictl
	I0610 19:47:54.856865    9989 command_runner.go:130] > /usr/bin/crictl
	I0610 19:47:54.856953    9989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 19:47:54.886614    9989 command_runner.go:130] > Version:  0.1.0
	I0610 19:47:54.886666    9989 command_runner.go:130] > RuntimeName:  docker
	I0610 19:47:54.886674    9989 command_runner.go:130] > RuntimeVersion:  26.1.4
	I0610 19:47:54.886680    9989 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 19:47:54.887717    9989 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.4
	RuntimeApiVersion:  v1
	I0610 19:47:54.887786    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.903316    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.904109    9989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 19:47:54.921823    9989 command_runner.go:130] > 26.1.4
	I0610 19:47:54.965802    9989 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.4 ...
	I0610 19:47:54.965890    9989 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:47:54.966288    9989 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0610 19:47:54.971034    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
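	The /etc/hosts rewrite above drops any stale host.minikube.internal line and appends the current mapping. A minimal Go version of that filter-and-append follows; it only prints the result here, and writing the file back (the "sudo cp" step) is left out.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes lines ending in "\t<name>" and appends "ip\tname",
	// mirroring the grep -v / echo pipeline in the log above.
	func ensureHostsEntry(contents, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(contents, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
	}

	func main() {
		hosts, _ := os.ReadFile("/etc/hosts")
		fmt.Print(ensureHostsEntry(string(hosts), "192.169.0.1", "host.minikube.internal"))
	}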
	I0610 19:47:54.981371    9989 kubeadm.go:877] updating cluster {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 19:47:54.981452    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:47:54.981509    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:54.993718    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:54.993732    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:54.993737    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:54.993741    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:54.993744    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:54.993748    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:54.993753    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:54.993756    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:54.993761    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:54.993765    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:54.994255    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:54.994266    9989 docker.go:615] Images already preloaded, skipping extraction
	I0610 19:47:54.994336    9989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 19:47:55.006339    9989 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0610 19:47:55.006352    9989 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0610 19:47:55.006356    9989 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 19:47:55.006360    9989 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0610 19:47:55.006363    9989 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0610 19:47:55.006379    9989 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0610 19:47:55.006385    9989 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0610 19:47:55.006390    9989 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0610 19:47:55.006394    9989 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 19:47:55.006398    9989 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0610 19:47:55.006906    9989 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0610 19:47:55.006921    9989 cache_images.go:84] Images are preloaded, skipping loading
	I0610 19:47:55.006932    9989 kubeadm.go:928] updating node { 192.169.0.19 8443 v1.30.1 docker true true} ...
	I0610 19:47:55.007008    9989 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-353000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
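	The kubelet drop-in above is a rendered template with the node name, IP, and versioned binary path filled in. A hypothetical text/template sketch of that rendering; the template string and field names are invented for illustration and are not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"Kubelet": "/var/lib/minikube/binaries/v1.30.1/kubelet",
			"Node":    "multinode-353000",
			"IP":      "192.169.0.19",
		})
	}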
	I0610 19:47:55.007079    9989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 19:47:55.025485    9989 command_runner.go:130] > cgroupfs
	I0610 19:47:55.026122    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:47:55.026131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:47:55.026139    9989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 19:47:55.026158    9989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.19 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-353000 NodeName:multinode-353000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 19:47:55.026249    9989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-353000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 19:47:55.026311    9989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 19:47:55.034754    9989 command_runner.go:130] > kubeadm
	I0610 19:47:55.034764    9989 command_runner.go:130] > kubectl
	I0610 19:47:55.034767    9989 command_runner.go:130] > kubelet
	I0610 19:47:55.034842    9989 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 19:47:55.034886    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 19:47:55.042800    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0610 19:47:55.056385    9989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 19:47:55.069690    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0610 19:47:55.083214    9989 ssh_runner.go:195] Run: grep 192.169.0.19	control-plane.minikube.internal$ /etc/hosts
	I0610 19:47:55.086096    9989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 19:47:55.096237    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:47:55.195683    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 19:47:55.209046    9989 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000 for IP: 192.169.0.19
	I0610 19:47:55.209070    9989 certs.go:194] generating shared ca certs ...
	I0610 19:47:55.209087    9989 certs.go:226] acquiring lock for ca certs: {Name:mkb8782270d93d160af8329e99f7f211e7b6b737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:47:55.209270    9989 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key
	I0610 19:47:55.209345    9989 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key
	I0610 19:47:55.209355    9989 certs.go:256] generating profile certs ...
	I0610 19:47:55.209458    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key
	I0610 19:47:55.209537    9989 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key.6aa173b6
	I0610 19:47:55.209630    9989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key
	I0610 19:47:55.209637    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 19:47:55.209659    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 19:47:55.209677    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 19:47:55.209695    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 19:47:55.209716    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 19:47:55.209746    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 19:47:55.209778    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 19:47:55.209796    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 19:47:55.209888    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem (1338 bytes)
	W0610 19:47:55.209936    9989 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485_empty.pem, impossibly tiny 0 bytes
	I0610 19:47:55.209945    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 19:47:55.209987    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem (1082 bytes)
	I0610 19:47:55.210029    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem (1123 bytes)
	I0610 19:47:55.210067    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem (1679 bytes)
	I0610 19:47:55.210150    9989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:47:55.210197    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem -> /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.210218    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.210236    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.210677    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 19:47:55.243710    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 19:47:55.274291    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 19:47:55.304150    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 19:47:55.327241    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 19:47:55.347168    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 19:47:55.366973    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 19:47:55.386745    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 19:47:55.406837    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/6485.pem --> /usr/share/ca-certificates/6485.pem (1338 bytes)
	I0610 19:47:55.426587    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /usr/share/ca-certificates/64852.pem (1708 bytes)
	I0610 19:47:55.446314    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 19:47:55.466320    9989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 19:47:55.480094    9989 ssh_runner.go:195] Run: openssl version
	I0610 19:47:55.484173    9989 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 19:47:55.484381    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6485.pem && ln -fs /usr/share/ca-certificates/6485.pem /etc/ssl/certs/6485.pem"
	I0610 19:47:55.492857    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496253    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496359    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 11 01:57 /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.496397    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6485.pem
	I0610 19:47:55.500429    9989 command_runner.go:130] > 51391683
	I0610 19:47:55.500562    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6485.pem /etc/ssl/certs/51391683.0"
	I0610 19:47:55.508913    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64852.pem && ln -fs /usr/share/ca-certificates/64852.pem /etc/ssl/certs/64852.pem"
	I0610 19:47:55.517404    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520837    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.520969    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 11 01:57 /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.521015    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64852.pem
	I0610 19:47:55.525079    9989 command_runner.go:130] > 3ec20f2e
	I0610 19:47:55.525226    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 19:47:55.533665    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 19:47:55.542055    9989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545479    9989 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545578    9989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 11 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.545613    9989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 19:47:55.549597    9989 command_runner.go:130] > b5213941
	I0610 19:47:55.549850    9989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
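	Each CA certificate above gets linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A Go sketch of that step; it shells out to openssl for the hash, since the subject-hash algorithm is OpenSSL-specific, and the paths are illustrative (root privileges would be required for real).

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := "/etc/ssl/certs/" + hash + ".0"
		// ln -fs equivalent: remove any stale link, then point it at the cert.
		_ = os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}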
	I0610 19:47:55.558357    9989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561717    9989 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 19:47:55.561732    9989 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 19:47:55.561740    9989 command_runner.go:130] > Device: 253,1	Inode: 8384328     Links: 1
	I0610 19:47:55.561749    9989 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:47:55.561758    9989 command_runner.go:130] > Access: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561763    9989 command_runner.go:130] > Modify: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561770    9989 command_runner.go:130] > Change: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561776    9989 command_runner.go:130] >  Birth: 2024-06-11 02:40:08.606464981 +0000
	I0610 19:47:55.561913    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 19:47:55.566014    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.566161    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 19:47:55.570209    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.570381    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 19:47:55.574601    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.574837    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 19:47:55.578866    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.579032    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 19:47:55.583114    9989 command_runner.go:130] > Certificate will not expire
	I0610 19:47:55.583281    9989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 19:47:55.587426    9989 command_runner.go:130] > Certificate will not expire
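	The "openssl x509 -checkend 86400" calls above test whether each certificate expires within the next 24 hours. The equivalent check with crypto/x509 is sketched below; the certificate path is illustrative.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// -checkend N succeeds iff the cert is still valid N seconds from now.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}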
	I0610 19:47:55.587558    9989 kubeadm.go:391] StartCluster: {Name:multinode-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.20 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.21 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:47:55.587674    9989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.599645    9989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 19:47:55.607448    9989 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0610 19:47:55.607459    9989 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0610 19:47:55.607466    9989 command_runner.go:130] > /var/lib/minikube/etcd:
	I0610 19:47:55.607470    9989 command_runner.go:130] > member
	W0610 19:47:55.607549    9989 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 19:47:55.607559    9989 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 19:47:55.607568    9989 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 19:47:55.607620    9989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 19:47:55.615074    9989 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:47:55.615382    9989 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-353000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.615468    9989 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-5942/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-353000" cluster setting kubeconfig missing "multinode-353000" context setting]
	I0610 19:47:55.615649    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:47:55.616397    9989 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:47:55.616577    9989 kapi.go:59] client config for multinode-353000: &rest.Config{Host:"https://192.169.0.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-5942/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x89f9600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 19:47:55.616926    9989 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 19:47:55.617061    9989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 19:47:55.624482    9989 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.19
	I0610 19:47:55.624500    9989 kubeadm.go:1154] stopping kube-system containers ...
	I0610 19:47:55.624549    9989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 19:47:55.638294    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.638306    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.638309    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.638314    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.638319    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.638322    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.638326    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.638329    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.638332    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.638345    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.638349    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.638352    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.638355    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.638358    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.638362    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.638365    9989 command_runner.go:130] > 2627ea28857a
	I0610 19:47:55.638951    9989 docker.go:483] Stopping containers: [deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a]
	I0610 19:47:55.639021    9989 ssh_runner.go:195] Run: docker stop deba067632e3 130521568c69 f43f6c7bede5 5cbb1f284883 f854aa2e2bd3 1b251ec109bf 75aef0f938fa 5e434eeac16f 496239ba9459 4f9c6abaf085 e847ea1ccea3 254a0e0afe62 0e7e3b74d4e9 4479d5328ed8 4a744abd670d 2627ea28857a
	I0610 19:47:55.653484    9989 command_runner.go:130] > deba067632e3
	I0610 19:47:55.653495    9989 command_runner.go:130] > 130521568c69
	I0610 19:47:55.653500    9989 command_runner.go:130] > f43f6c7bede5
	I0610 19:47:55.653503    9989 command_runner.go:130] > 5cbb1f284883
	I0610 19:47:55.653506    9989 command_runner.go:130] > f854aa2e2bd3
	I0610 19:47:55.653624    9989 command_runner.go:130] > 1b251ec109bf
	I0610 19:47:55.653629    9989 command_runner.go:130] > 75aef0f938fa
	I0610 19:47:55.653632    9989 command_runner.go:130] > 5e434eeac16f
	I0610 19:47:55.653791    9989 command_runner.go:130] > 496239ba9459
	I0610 19:47:55.653797    9989 command_runner.go:130] > 4f9c6abaf085
	I0610 19:47:55.653800    9989 command_runner.go:130] > e847ea1ccea3
	I0610 19:47:55.653803    9989 command_runner.go:130] > 254a0e0afe62
	I0610 19:47:55.653806    9989 command_runner.go:130] > 0e7e3b74d4e9
	I0610 19:47:55.653844    9989 command_runner.go:130] > 4479d5328ed8
	I0610 19:47:55.653850    9989 command_runner.go:130] > 4a744abd670d
	I0610 19:47:55.653853    9989 command_runner.go:130] > 2627ea28857a
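
	The restart path first collects every container whose name matches the kubeadm naming scheme k8s_<container>_<pod>_(kube-system)_ and stops them all in a single "docker stop" invocation, as the two commands above show. A sketch of that step under the assumption of a local docker CLI; the structure is illustrative, not minikube's actual code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // List all kube-system pod containers, running or not (-a), IDs only.
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	        if err != nil {
	            panic(err)
	        }
	        ids := strings.Fields(string(out))
	        if len(ids) == 0 {
	            fmt.Println("no kube-system containers to stop")
	            return
	        }
	        // Stop them all in one invocation, mirroring the log above.
	        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
	            panic(err)
	        }
	        fmt.Println("stopped:", ids)
	    }
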
	I0610 19:47:55.654638    9989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 19:47:55.667514    9989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 19:47:55.674892    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 19:47:55.674904    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 19:47:55.674910    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 19:47:55.674930    9989 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674992    9989 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 19:47:55.674999    9989 kubeadm.go:156] found existing configuration files:
	
	I0610 19:47:55.675040    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 19:47:55.682287    9989 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682303    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 19:47:55.682341    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 19:47:55.689835    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 19:47:55.696884    9989 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696902    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 19:47:55.696953    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 19:47:55.704404    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.711485    9989 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711508    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 19:47:55.711548    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 19:47:55.718937    9989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 19:47:55.726127    9989 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726146    9989 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 19:47:55.726181    9989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
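
	Each of the four grep/rm pairs above applies the same stale-config rule: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, remove it so kubeadm can regenerate it (the exit status 2 in this run simply means the file was absent). A compact sketch of that loop; paths and the endpoint are taken from the log, the loop itself is illustrative:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        const endpoint = "https://control-plane.minikube.internal:8443"
	        for _, conf := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero when the endpoint is missing or the file is absent.
	            if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
	                fmt.Println("removing stale or missing config:", conf)
	                os.Remove(conf) // best-effort, like the sudo rm -f in the log
	            }
	        }
	    }
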
	I0610 19:47:55.733619    9989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 19:47:55.741255    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.804058    9989 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 19:47:55.804120    9989 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 19:47:55.804305    9989 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 19:47:55.804483    9989 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 19:47:55.804689    9989 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0610 19:47:55.804862    9989 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0610 19:47:55.805120    9989 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0610 19:47:55.805265    9989 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0610 19:47:55.805411    9989 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0610 19:47:55.805605    9989 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 19:47:55.805743    9989 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 19:47:55.806676    9989 command_runner.go:130] > [certs] Using the existing "sa" key
	I0610 19:47:55.806774    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:55.845988    9989 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 19:47:55.886933    9989 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 19:47:56.013943    9989 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 19:47:56.065755    9989 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 19:47:56.199902    9989 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 19:47:56.356026    9989 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 19:47:56.358145    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.407409    9989 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 19:47:56.408002    9989 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 19:47:56.408066    9989 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 19:47:56.513337    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.563955    9989 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 19:47:56.563969    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 19:47:56.570350    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 19:47:56.570701    9989 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 19:47:56.571965    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:47:56.651317    9989 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
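
	Rather than running a full "kubeadm init", the restart replays five individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, as the run lines above show. A sketch of that sequence (sudo and the PATH override onto the versioned binaries directory are elided for brevity):

	    package main

	    import (
	        "os"
	        "os/exec"
	    )

	    func main() {
	        phases := [][]string{
	            {"certs", "all"},
	            {"kubeconfig", "all"},
	            {"kubelet-start"},
	            {"control-plane", "all"},
	            {"etcd", "local"},
	        }
	        for _, phase := range phases {
	            args := append([]string{"init", "phase"}, phase...)
	            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	            cmd := exec.Command("kubeadm", args...)
	            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	            if err := cmd.Run(); err != nil {
	                panic(err) // each phase must succeed before the next runs
	            }
	        }
	    }
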
	I0610 19:47:56.653781    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:47:56.653842    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.154036    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.654114    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:47:57.666427    9989 command_runner.go:130] > 1536
	I0610 19:47:57.666488    9989 api_server.go:72] duration metric: took 1.012757588s to wait for apiserver process to appear ...
	I0610 19:47:57.666498    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:47:57.666515    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.438002    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.438019    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.438029    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.455738    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 19:47:59.455759    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 19:47:59.667766    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:47:59.672313    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:47:59.672324    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.166779    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.171966    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 19:48:00.171979    9989 api_server.go:103] status: https://192.169.0.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 19:48:00.666724    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:00.671558    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:48:00.671622    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:00.671627    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:00.671635    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:00.671638    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:00.683001    9989 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 19:48:00.683015    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:00.683020    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:00.683023    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:00.683026    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:00.683029    9989 round_trippers.go:580]     Audit-Id: 09da700d-8425-4926-9374-2d6528bd7bb9
	I0610 19:48:00.683033    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:00.683035    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:00.683038    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:00.683058    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:00.683109    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:00.683119    9989 api_server.go:131] duration metric: took 3.016721791s to wait for apiserver health ...
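
	The healthz wait above tolerates two transient failure modes and keeps polling on a roughly 500ms cadence: the 403 (the probe authenticates as system:anonymous, which RBAC rejects until the bootstrap roles exist) and the 500 (the rbac and scheduling poststarthooks have not finished). It succeeds once /healthz returns 200 "ok". A sketch of that loop; TLS verification is skipped here purely for brevity, whereas the real client trusts the cluster CA:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        for {
	            resp, err := client.Get("https://192.169.0.19:8443/healthz")
	            if err == nil {
	                healthy := resp.StatusCode == http.StatusOK
	                resp.Body.Close()
	                if healthy {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // 403/500/conn refused: retry
	        }
	    }
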
	I0610 19:48:00.683126    9989 cni.go:84] Creating CNI manager for ""
	I0610 19:48:00.683131    9989 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 19:48:00.722329    9989 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 19:48:00.744311    9989 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 19:48:00.748261    9989 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 19:48:00.748273    9989 command_runner.go:130] >   Size: 2781656   	Blocks: 5440       IO Block: 4096   regular file
	I0610 19:48:00.748278    9989 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0610 19:48:00.748283    9989 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 19:48:00.748290    9989 command_runner.go:130] > Access: 2024-06-11 02:45:50.361198634 +0000
	I0610 19:48:00.748295    9989 command_runner.go:130] > Modify: 2024-06-06 15:35:25.000000000 +0000
	I0610 19:48:00.748300    9989 command_runner.go:130] > Change: 2024-06-11 02:45:47.690352312 +0000
	I0610 19:48:00.748303    9989 command_runner.go:130] >  Birth: -
	I0610 19:48:00.748470    9989 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 19:48:00.748478    9989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 19:48:00.778024    9989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 19:48:01.117060    9989 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.147629    9989 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 19:48:01.301672    9989 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 19:48:01.356197    9989 command_runner.go:130] > daemonset.apps/kindnet configured
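
	Because the kindnet objects survive from the first boot, the apply above reports them as "unchanged" (or "configured" where the spec drifted) rather than created; kubectl apply is a declarative upsert, so re-running it on restart is safe. The equivalent step as a sketch, shelling out to the versioned kubectl exactly as the log does:

	    package main

	    import (
	        "os"
	        "os/exec"
	    )

	    func main() {
	        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
	            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
	            "-f", "/var/tmp/minikube/cni.yaml")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            os.Exit(1)
	        }
	    }
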
	I0610 19:48:01.357762    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:01.357819    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.357825    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.357831    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.357834    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.361084    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:01.361095    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.361101    9989 round_trippers.go:580]     Audit-Id: 0a68b78a-1971-4606-9c89-6dd28309d599
	I0610 19:48:01.361107    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.361112    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.361115    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.361118    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.361121    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.362367    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.365313    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:01.365340    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 19:48:01.365347    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 19:48:01.365352    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:01.365356    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0610 19:48:01.365362    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:01.365367    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 19:48:01.365371    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 19:48:01.365374    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:01.365377    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:01.365381    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0610 19:48:01.365385    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 19:48:01.365390    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0610 19:48:01.365395    9989 system_pods.go:74] duration metric: took 7.626153ms to wait for pod list to return data ...
	I0610 19:48:01.365403    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:01.365440    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:01.365444    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.365450    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.365454    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.367622    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.367635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.367640    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:01 GMT
	I0610 19:48:01.367653    9989 round_trippers.go:580]     Audit-Id: 9ef6ecc8-1407-4850-b836-c92476875d2b
	I0610 19:48:01.367661    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.367666    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.367671    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.367674    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.367975    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"909"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15572 chars]
	I0610 19:48:01.368527    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368541    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368549    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368552    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368556    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:01.368559    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:01.368563    9989 node_conditions.go:105] duration metric: took 3.15591ms to run NodePressure ...
	I0610 19:48:01.368573    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 19:48:01.551683    9989 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 19:48:01.669147    9989 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 19:48:01.670157    9989 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 19:48:01.670212    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0610 19:48:01.670218    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.670224    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.670227    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.674624    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:01.674636    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.674641    9989 round_trippers.go:580]     Audit-Id: c47f63c6-e6e7-4d8d-b049-a6e6efe1f028
	I0610 19:48:01.674644    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.674650    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.674654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.674656    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.674659    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.675233    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30912 chars]
	I0610 19:48:01.675943    9989 kubeadm.go:733] kubelet initialised
	I0610 19:48:01.675953    9989 kubeadm.go:734] duration metric: took 5.786634ms waiting for restarted kubelet to initialise ...
	I0610 19:48:01.675959    9989 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:01.676001    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:01.676006    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.676012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.676015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.678521    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.678536    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.678546    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.678551    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.678555    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.678558    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.678562    9989 round_trippers.go:580]     Audit-Id: 695aab2d-7185-4ab8-93db-4232865056b6
	I0610 19:48:01.678564    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.679581    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"916"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88055 chars]
	I0610 19:48:01.681433    9989 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.681482    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:01.681487    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.681493    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.681497    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.683281    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.683286    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.683290    9989 round_trippers.go:580]     Audit-Id: ebbbfe81-a38f-4a3c-8e5c-90703473f744
	I0610 19:48:01.683293    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.683296    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.683308    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.683313    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.683316    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.683580    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:01.683874    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.683881    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.683887    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.683891    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686546    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.686555    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.686561    9989 round_trippers.go:580]     Audit-Id: 2892fe1d-d0a8-4261-8bf0-3133e5e2a446
	I0610 19:48:01.686565    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.686568    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.686571    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.686575    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.686578    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.686656    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.686844    9989 pod_ready.go:97] node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.686854    9989 pod_ready.go:81] duration metric: took 5.411979ms for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.686861    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.686867    9989 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.686904    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:01.686909    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.686915    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.686918    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.688977    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.688986    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.688991    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.688996    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.689002    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.689007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.689011    9989 round_trippers.go:580]     Audit-Id: 3ace8889-aedb-4a19-9411-27b71b8a2e0b
	I0610 19:48:01.689015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.689291    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:01.689535    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.689542    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.689547    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.689550    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.690829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.690836    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.690841    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.690845    9989 round_trippers.go:580]     Audit-Id: 2f32a662-31a6-4053-8a84-be837537cd4c
	I0610 19:48:01.690848    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.690851    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.690855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.690858    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.691071    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.691242    9989 pod_ready.go:97] node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691252    9989 pod_ready.go:81] duration metric: took 4.380161ms for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.691258    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "etcd-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.691269    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.691301    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:01.691306    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.691311    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.691315    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.692447    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.692457    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.692462    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.692466    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.692469    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.692471    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.692474    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.692476    9989 round_trippers.go:580]     Audit-Id: bad7c45b-bf08-4758-a569-97c3dc9eafb6
	I0610 19:48:01.692666    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"893","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0610 19:48:01.692920    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.692926    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.692932    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.692936    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.694073    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.694081    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.694086    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.694089    9989 round_trippers.go:580]     Audit-Id: 98fa13c5-25d7-4e14-b2a2-7560361baffd
	I0610 19:48:01.694092    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.694095    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.694098    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.694100    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.694341    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.694500    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.694509    9989 pod_ready.go:81] duration metric: took 3.23437ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.694514    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-apiserver-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.694519    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.694545    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:01.694549    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.694555    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.694559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.695753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.695761    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.695766    9989 round_trippers.go:580]     Audit-Id: a7d05f7f-1539-4d5f-9fe3-3695667a8deb
	I0610 19:48:01.695770    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.695772    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.695775    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.695777    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.695780    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.695988    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"895","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0610 19:48:01.757966    9989 request.go:629] Waited for 61.697059ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758041    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:01.758048    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.758053    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.758057    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.759756    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:01.759766    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.759773    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.759779    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.759783    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.759788    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.759793    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.759806    9989 round_trippers.go:580]     Audit-Id: e8ae6de5-f7c9-4f36-881c-ed09a8012b60
	I0610 19:48:01.759959    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:01.760178    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.760188    9989 pod_ready.go:81] duration metric: took 65.665915ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:01.760194    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-controller-manager-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:01.760200    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:01.959909    9989 request.go:629] Waited for 199.659235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960065    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:01.960075    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:01.960086    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:01.960093    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:01.962763    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:01.962778    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:01.962785    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:01.962789    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:01.962793    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:01.962819    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:01.962827    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:01.962832    9989 round_trippers.go:580]     Audit-Id: e27af578-4ca0-4cfe-8af3-b60f6b0fa9bd
	I0610 19:48:01.962941    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:02.158260    9989 request.go:629] Waited for 194.998097ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158342    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:02.158351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.158363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.158369    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.160892    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.160907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.160913    9989 round_trippers.go:580]     Audit-Id: 0bef1bb4-379d-409d-8e02-4dbc9a2811a4
	I0610 19:48:02.160918    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.160949    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.160957    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.160961    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.160968    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.161074    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"750","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:02.161324    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:02.161336    9989 pod_ready.go:81] duration metric: took 401.144458ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.161344    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
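The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's own request-level rate limiter, not by the API server: when QPS and Burst are left unset on rest.Config, client-go applies a conservative token-bucket default (commonly about 5 QPS with a burst of 10), so a polling loop this tight gets delayed roughly 200ms per request, matching the waits above. A hedged sketch of raising those limits (the values are arbitrary, for illustration only):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // With QPS/Burst at zero, client-go installs a default token-bucket
        // limiter; the ~200ms "Waited for ..." lines above are that limiter
        // delaying requests. Raising the limits (illustrative values) trades
        // extra API-server load for lower client-side latency.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }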
	I0610 19:48:02.161351    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.358390    9989 request.go:629] Waited for 196.956176ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358484    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:02.358496    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.358508    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.358515    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.360992    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.361021    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.361031    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.361036    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.361039    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.361043    9989 round_trippers.go:580]     Audit-Id: 6f8be12b-1957-417b-8d1b-e678c7792dd3
	I0610 19:48:02.361046    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.361051    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.361202    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:02.557934    9989 request.go:629] Waited for 196.31847ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.557999    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:02.558009    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.558037    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.558044    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.560427    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.560441    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.560448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.560454    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:02 GMT
	I0610 19:48:02.560458    9989 round_trippers.go:580]     Audit-Id: 4c41615e-621c-4a97-9365-ac7c1773c395
	I0610 19:48:02.560461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.560465    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.560468    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.560523    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:02.560758    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:02.560768    9989 pod_ready.go:81] duration metric: took 399.425236ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.560777    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:02.757957    9989 request.go:629] Waited for 197.131938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758066    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:02.758078    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.758089    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.758095    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.761202    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:02.761216    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.761223    9989 round_trippers.go:580]     Audit-Id: b73d177c-0cc8-4b3e-9eaa-58e1aca589bd
	I0610 19:48:02.761229    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.761233    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.761236    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.761240    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.761243    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.761619    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:02.958192    9989 request.go:629] Waited for 196.273854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958328    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:02.958342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:02.958357    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:02.958367    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:02.961275    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:02.961290    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:02.961297    9989 round_trippers.go:580]     Audit-Id: 55ebfcfe-9c2e-43ee-8757-62fb6711bcdf
	I0610 19:48:02.961302    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:02.961312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:02.961315    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:02.961320    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:02.961324    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:02.961498    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:02.961759    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961777    9989 pod_ready.go:81] duration metric: took 401.008697ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:02.961786    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-proxy-v7s4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:02.961792    9989 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:03.158219    9989 request.go:629] Waited for 196.363249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158365    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:03.158377    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.158388    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.158394    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.160987    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.161000    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.161007    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.161011    9989 round_trippers.go:580]     Audit-Id: 4b2e7508-8f47-4d7f-b4ea-f0310bd3d491
	I0610 19:48:03.161015    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.161019    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.161023    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.161027    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.161126    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"897","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0610 19:48:03.359868    9989 request.go:629] Waited for 198.409302ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.359998    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.360008    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.360020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.360027    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.362871    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:03.362892    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.362899    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.362904    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.362908    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.362916    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.362921    9989 round_trippers.go:580]     Audit-Id: ba3a2e04-447a-4800-872e-bbbc8698c7f3
	I0610 19:48:03.362931    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.363233    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:03.363483    9989 pod_ready.go:97] node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363503    9989 pod_ready.go:81] duration metric: took 401.718227ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:03.363511    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000" hosting pod "kube-scheduler-multinode-353000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000" has status "Ready":"False"
	I0610 19:48:03.363517    9989 pod_ready.go:38] duration metric: took 1.687604899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:03.363529    9989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 19:48:03.375111    9989 command_runner.go:130] > -16
	I0610 19:48:03.375245    9989 ops.go:34] apiserver oom_adj: -16
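The two lines above verify the API server's OOM score adjustment: minikube reads /proc/<pid>/oom_adj inside the VM over SSH, and the -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A local, illustrative equivalent of that shell pipeline (assumes a Linux /proc and exactly one kube-apiserver process; this stands in for the SSH transport):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err) // no such process, or pgrep missing
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }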
	I0610 19:48:03.375257    9989 kubeadm.go:591] duration metric: took 7.76794986s to restartPrimaryControlPlane
	I0610 19:48:03.375262    9989 kubeadm.go:393] duration metric: took 7.787982406s to StartCluster
	I0610 19:48:03.375275    9989 settings.go:142] acquiring lock: {Name:mkfdfd0a396b1866366b70895e6d936c4f7de68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 19:48:03.375367    9989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:48:03.375765    9989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/kubeconfig: {Name:mk17c26f5660619213da42e231c1cc432133f3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
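The lock.go line above shows minikube serializing the kubeconfig write behind a named file lock, retrying every 500ms and giving up after 1m0s. A minimal sketch of the same acquire-with-retry pattern using an O_EXCL lockfile (minikube's real lock implementation differs; the helper below is illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // writeWithLock retries creating path+".lock" with O_EXCL until it wins
    // or times out, mirroring the Delay:500ms Timeout:1m0s logged above.
    func writeWithLock(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                return os.WriteFile(path, data, 0o600)
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", lock)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        fmt.Println(writeWithLock("/tmp/kubeconfig-demo", []byte("demo\n"),
            500*time.Millisecond, time.Minute))
    }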
	I0610 19:48:03.376028    9989 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 19:48:03.400444    9989 out.go:177] * Verifying Kubernetes components...
	I0610 19:48:03.376041    9989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 19:48:03.376184    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:03.421565    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:03.463087    9989 out.go:177] * Enabled addons: 
	I0610 19:48:03.484252    9989 addons.go:510] duration metric: took 108.208716ms for enable addons: enabled=[]
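Every entry in the toEnable map logged at addons.go:507 is false for this restarted profile, so the filtering step reduces it to the empty "enabled=[]" printed just above. A toy version of that reduction (map literal abbreviated; names are illustrative):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        // Abbreviated stand-in for the full toEnable map in the log above.
        toEnable := map[string]bool{
            "ambassador": false, "gcp-auth": false, "ingress": false,
            "metrics-server": false, "registry": false, "volcano": false,
        }
        enabled := []string{}
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Printf("enabled=%v\n", enabled) // enabled=[]
    }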
	I0610 19:48:03.563649    9989 ssh_runner.go:195] Run: sudo systemctl start kubelet
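Here minikube's ssh_runner executes systemctl inside the guest: the daemon-reload above picks up regenerated unit files, and this line (re)starts the kubelet. Run locally for illustration, the same two commands look like this (requires systemd and sudo; plain exec stands in for the SSH transport, it is not minikube's runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := [][]string{
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "start", "kubelet"},
        }
        for _, args := range cmds {
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            fmt.Printf("$ %v\n%s", args, out)
            if err != nil {
                fmt.Println("error:", err)
                return
            }
        }
    }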
	I0610 19:48:03.576041    9989 node_ready.go:35] waiting up to 6m0s for node "multinode-353000" to be "Ready" ...
	I0610 19:48:03.576103    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:03.576110    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:03.576116    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:03.576120    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:03.577625    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:03.577635    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:03.577640    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:03.577644    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:03 GMT
	I0610 19:48:03.577652    9989 round_trippers.go:580]     Audit-Id: 1a9b118d-1c1f-4a85-b573-ec6d65f2ea3e
	I0610 19:48:03.577656    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:03.577658    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:03.577661    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:03.577737    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.077472    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.077497    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.077513    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.077519    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.080273    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.080289    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.080298    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.080305    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.080311    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.080315    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.080320    9989 round_trippers.go:580]     Audit-Id: 1859e085-211f-4e27-92e7-f3b22958dff9
	I0610 19:48:04.080323    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.080687    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:04.577072    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:04.577095    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:04.577107    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:04.577115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:04.579474    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:04.579488    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:04.579496    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:04.579500    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:04.579505    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:04.579508    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:04 GMT
	I0610 19:48:04.579511    9989 round_trippers.go:580]     Audit-Id: d35268d8-5a6a-4b80-9fc5-c56ab0f588fa
	I0610 19:48:04.579516    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:04.579860    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"842","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5300 chars]
	I0610 19:48:05.077214    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.077238    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.077249    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.077255    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.079762    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.079777    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.079784    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.079788    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.079791    9989 round_trippers.go:580]     Audit-Id: 8db0d71b-506a-485d-b9c4-877536f220a0
	I0610 19:48:05.079795    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.079820    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.079828    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.079940    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.080178    9989 node_ready.go:49] node "multinode-353000" has status "Ready":"True"
	I0610 19:48:05.080194    9989 node_ready.go:38] duration metric: took 1.504185458s for node "multinode-353000" to be "Ready" ...
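The node_ready loop above re-fetches the Node object roughly every 500ms until its Ready condition flips to True, bounded by the 6m0s budget from start.go; here it succeeded after about 1.5s (note the resourceVersion moving from 842 to 928 when the status changed). A compact sketch of that loop using apimachinery's wait helpers (assumes a reachable kubeconfig; PollUntilContextTimeout exists in recent apimachinery releases):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms, up to 6 minutes, until Ready is True.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-353000", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node ready wait:", err)
    }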
	I0610 19:48:05.080202    9989 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 19:48:05.080250    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:05.080258    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.080265    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.080270    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.082809    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.082818    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.082823    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.082827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.082831    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.082834    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.082836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.082839    9989 round_trippers.go:580]     Audit-Id: ddb615f3-2587-4f9c-8d81-31db61bb1a6e
	I0610 19:48:05.083922    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 87462 chars]
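With the node Ready, the wait moves from node level to pod level: minikube fetches the whole kube-system PodList once (above), then waits per pod matching the system-critical labels listed at pod_ready.go:35, starting with coredns below. A hedged sketch of enumerating those pods with client-go label selectors (selector strings taken from the log; the rest is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // The system-critical label selectors named in the log above.
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                fmt.Printf("%s: %s\n", sel, p.Name)
            }
        }
    }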
	I0610 19:48:05.085829    9989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:05.085871    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.085875    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.085881    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.085896    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.086914    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.086929    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.086937    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.086941    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.086944    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.086947    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.086957    9989 round_trippers.go:580]     Audit-Id: b4ad06e6-d502-42ac-9675-7f15e25621df
	I0610 19:48:05.086961    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.087093    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.087343    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.087350    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.087355    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.087359    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.088202    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:05.088209    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.088215    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.088221    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.088226    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.088231    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:05 GMT
	I0610 19:48:05.088236    9989 round_trippers.go:580]     Audit-Id: b6058267-b32d-4d28-9209-3e3c65514ada
	I0610 19:48:05.088239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.088425    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:05.586718    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:05.586742    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.586754    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.586759    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.589614    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:05.589627    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.589634    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.589639    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.589643    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.589648    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.589653    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.589657    9989 round_trippers.go:580]     Audit-Id: a2558bb6-21de-413e-adb7-2066705c0c39
	I0610 19:48:05.589740    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:05.590099    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:05.590114    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:05.590121    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:05.590127    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:05.591639    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:05.591647    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:05.591654    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:05.591672    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:05.591679    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:05.591683    9989 round_trippers.go:580]     Audit-Id: 2de87cae-73ae-440c-a6d4-90fb3f51f475
	I0610 19:48:05.591688    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:05.591709    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:05.591808    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.086573    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.086600    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.086612    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.086618    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.089412    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.089427    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.089434    9989 round_trippers.go:580]     Audit-Id: f7e13af5-b1a6-43d3-bb98-5aad49fca036
	I0610 19:48:06.089438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.089441    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.089446    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.089450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.089589    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.089977    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.089987    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.089994    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.089998    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.091344    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.091353    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.091358    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.091361    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:06 GMT
	I0610 19:48:06.091364    9989 round_trippers.go:580]     Audit-Id: 7a289ac0-a7eb-4e17-a539-34afa9d10e8f
	I0610 19:48:06.091367    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.091370    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.091372    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.091556    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:06.587106    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:06.587131    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.587143    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.587148    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.589792    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:06.589811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.589818    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.589822    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.589835    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.589840    9989 round_trippers.go:580]     Audit-Id: 1ec66f4a-3740-4406-bbd1-e5ca56116de6
	I0610 19:48:06.589843    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.589847    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.590009    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:06.590408    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:06.590419    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:06.590425    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:06.590431    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:06.591734    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:06.591742    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:06.591746    9989 round_trippers.go:580]     Audit-Id: 3ed956f5-c213-4c78-a89b-9a399e0d9f57
	I0610 19:48:06.591749    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:06.591752    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:06.591755    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:06.591758    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:06.591760    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:06.591853    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.086755    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.086817    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.086833    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.086840    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.089422    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.089436    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.089444    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.089448    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.089453    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.089456    9989 round_trippers.go:580]     Audit-Id: 3c2b2755-0928-4843-907f-76f6698cb531
	I0610 19:48:07.089461    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.089464    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.089848    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.090239    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.090248    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.090257    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.090263    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.091435    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.091442    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.091447    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.091461    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.091466    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:07 GMT
	I0610 19:48:07.091469    9989 round_trippers.go:580]     Audit-Id: a295b9b4-766e-4157-bafe-85b97af1b24f
	I0610 19:48:07.091473    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.091477    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.091632    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:07.091819    9989 pod_ready.go:102] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"False"
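The cycle above is one iteration of minikube's readiness poll: pod_ready.go re-fetches the Pod (and its Node) roughly every 500ms and reports "Ready":"False" until the kubelet flips the PodReady condition. As a minimal sketch only, assuming a configured client-go clientset and a 500ms cadence inferred from the timestamps (this is not minikube's actual implementation), the same wait can be written as:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the API server until the named pod reports the
// PodReady condition as True, or the timeout elapses. Each iteration
// corresponds to one "GET .../pods/<name>" round trip in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil // pod reports Ready
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed cadence, matching the ~500ms gaps above
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

In the log, eight such iterations for coredns-7db6d8ff4d-x984g precede the "Ready":"True" verdict below, which is where the ~4s duration metric comes from.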
	I0610 19:48:07.586768    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:07.586789    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.586801    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.586811    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.589483    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:07.589501    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.589508    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.589513    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.589518    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.589523    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.589529    9989 round_trippers.go:580]     Audit-Id: 8f011804-7b53-46a0-8762-c6021b6b797c
	I0610 19:48:07.589533    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.589733    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:07.590139    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:07.590149    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:07.590157    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:07.590161    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:07.591411    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:07.591423    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:07.591431    9989 round_trippers.go:580]     Audit-Id: 32d80ac7-569b-4efe-b59c-6c43cc45cbb0
	I0610 19:48:07.591438    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:07.591442    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:07.591450    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:07.591455    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:07.591459    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:07.591711    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.085955    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.085978    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.085989    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.085995    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.088888    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:08.088905    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.088913    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.088917    9989 round_trippers.go:580]     Audit-Id: 6130cd3b-545c-4dab-bb4e-8509f6ca7583
	I0610 19:48:08.088921    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.088924    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.088929    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.088943    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.089331    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.089733    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.089743    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.089751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.089757    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.091163    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.091171    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.091176    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.091178    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:08 GMT
	I0610 19:48:08.091181    9989 round_trippers.go:580]     Audit-Id: fb4feb18-1294-4799-b740-01b7c906b714
	I0610 19:48:08.091183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.091187    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.091191    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.091368    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:08.586116    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:08.586130    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.586136    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.586139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.588086    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:08.588098    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.588103    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.588106    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.588108    9989 round_trippers.go:580]     Audit-Id: 0ee2c29d-3bee-4ce6-b7f8-9c58b599b3c3
	I0610 19:48:08.588111    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.588114    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.588116    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.588226    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"892","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0610 19:48:08.588519    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:08.588525    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:08.588531    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:08.588534    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:08.593668    9989 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 19:48:08.593684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:08.593689    9989 round_trippers.go:580]     Audit-Id: ad0e5c68-e6f8-4266-8198-de1fd97d7f9b
	I0610 19:48:08.593692    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:08.593694    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:08.593696    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:08.593699    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:08.593702    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:08.593773    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.086588    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x984g
	I0610 19:48:09.086618    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.086658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.086666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.089146    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.089159    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.089199    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.089213    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.089220    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.089227    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.089232    9989 round_trippers.go:580]     Audit-Id: f98d64ed-8706-40c8-bca0-af200ff708e8
	I0610 19:48:09.089239    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.089496    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6783 chars]
	I0610 19:48:09.089821    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.089828    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.089834    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.089837    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.090901    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.090910    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.090914    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.090918    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.090922    9989 round_trippers.go:580]     Audit-Id: 684d3cb2-4de8-4213-801b-a1b1cdca1ae6
	I0610 19:48:09.090926    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.090929    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.090932    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.091098    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.091288    9989 pod_ready.go:92] pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:09.091297    9989 pod_ready.go:81] duration metric: took 4.005597593s for pod "coredns-7db6d8ff4d-x984g" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:09.091304    9989 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
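The etcd wait that begins here inspects the same Ready condition that is carried in the "Response Body" lines (the logged bodies are truncated before the status section appears). As a hedged illustration of what the checker looks for, the sketch below pulls the Ready condition out of such a Pod JSON document using only the standard library; readyStatus is a hypothetical helper, not part of minikube:

package podjson

import (
	"encoding/json"
	"fmt"
)

// readyStatus extracts the Ready condition from a Pod JSON document such
// as the (untruncated) bodies logged by request.go:1212 above.
func readyStatus(body []byte) (string, error) {
	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(body, &pod); err != nil {
		return "", err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status, nil // "True" or "False", as echoed by pod_ready.go
		}
	}
	return "", fmt.Errorf("no Ready condition present")
}

For etcd-multinode-353000 the polls below keep returning the body at resourceVersion 889, i.e. an unchanged (not yet Ready) status.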
	I0610 19:48:09.091332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.091336    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.091342    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.091345    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.092345    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.092354    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.092359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.092364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.092368    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.092372    9989 round_trippers.go:580]     Audit-Id: 0ec593cf-ab0e-4393-b1d5-d458992d576c
	I0610 19:48:09.092378    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.092386    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.092510    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.092739    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.092746    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.092751    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.092754    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.093693    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:09.093703    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.093710    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.093716    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.093720    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:09 GMT
	I0610 19:48:09.093723    9989 round_trippers.go:580]     Audit-Id: cd7754ad-de2e-4337-95c9-5f8181bafe8a
	I0610 19:48:09.093726    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.093736    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.093852    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:09.591562    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:09.591592    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.591601    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.591606    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:09.593937    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.593942    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.593946    9989 round_trippers.go:580]     Audit-Id: a1e77184-60e5-45b7-991d-afda7283198c
	I0610 19:48:09.593949    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.593953    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.593955    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.593958    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.594184    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:09.594428    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:09.594435    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:09.594441    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:09.594444    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:09.595688    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:09.595698    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:09.595705    9989 round_trippers.go:580]     Audit-Id: b8c9b2c8-7992-42e7-9bf8-112b13ef8d15
	I0610 19:48:09.595711    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:09.595721    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:09.595729    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:09.595732    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:09.595734    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:09.595855    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.091896    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.091930    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.091948    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.091961    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.094812    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:10.094827    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.094833    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.094838    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.094842    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.094847    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.094850    9989 round_trippers.go:580]     Audit-Id: 36d914ed-5a76-4cfd-aea2-50d2467afc00
	I0610 19:48:10.094854    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.095220    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.095550    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.095559    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.095567    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.095572    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.097001    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.097008    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.097012    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.097016    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.097018    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.097021    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:10 GMT
	I0610 19:48:10.097031    9989 round_trippers.go:580]     Audit-Id: 69d6521e-fa5d-4f41-a0e6-1742e53a772b
	I0610 19:48:10.097034    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.097219    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:10.592589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:10.592613    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.592625    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.592631    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.595848    9989 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 19:48:10.595860    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.595867    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.595872    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.595876    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.595881    9989 round_trippers.go:580]     Audit-Id: 11308bab-1148-4a9a-9a2f-6d24ea1297c6
	I0610 19:48:10.595886    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.595890    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.595995    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:10.596332    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:10.596342    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:10.596350    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:10.596372    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:10.597763    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:10.597770    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:10.597776    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:10.597781    9989 round_trippers.go:580]     Audit-Id: 04f99b83-61e5-4bf2-8781-a0e87f56f205
	I0610 19:48:10.597786    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:10.597791    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:10.597794    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:10.597796    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:10.597950    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.092146    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.092175    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.092188    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.092244    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.094833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.094848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.094855    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.094859    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.094864    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.094869    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.094873    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.094877    9989 round_trippers.go:580]     Audit-Id: f1b5bd76-11e8-4009-a1d4-09ae141a7be4
	I0610 19:48:11.095063    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.095396    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.095405    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.095414    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.095420    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.096829    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.096837    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.096842    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.096845    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.096848    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.096851    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.096855    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:11 GMT
	I0610 19:48:11.096857    9989 round_trippers.go:580]     Audit-Id: 5edc3937-e4f9-4fc8-924f-f2f08684b9af
	I0610 19:48:11.097460    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:11.097661    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
	I0610 19:48:11.592045    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:11.592069    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.592139    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.592150    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.594256    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:11.594268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.594276    9989 round_trippers.go:580]     Audit-Id: 22199be0-8b40-4afe-8222-00876ce24849
	I0610 19:48:11.594280    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.594284    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.594289    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.594292    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.594295    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.594751    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:11.595057    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:11.595064    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:11.595069    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:11.595073    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:11.596263    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:11.596270    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:11.596275    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:11.596277    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:11.596280    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:11.596282    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:11.596285    9989 round_trippers.go:580]     Audit-Id: 1e950ce6-6a1d-4fb4-862e-369bdd1c1b97
	I0610 19:48:11.596287    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:11.596438    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.091946    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.092024    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.092038    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.092047    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.094382    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:12.094392    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.094398    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.094402    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.094410    9989 round_trippers.go:580]     Audit-Id: fae3296c-1bb4-48d8-bb8a-365ebcc14279
	I0610 19:48:12.094421    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.094424    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.094428    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.094726    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.095092    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.095102    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.095110    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.095115    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.096329    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.096337    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.096342    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.096346    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:12 GMT
	I0610 19:48:12.096350    9989 round_trippers.go:580]     Audit-Id: 3613c759-c38d-4132-b7db-3ebfd2715c11
	I0610 19:48:12.096352    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.096355    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.096357    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.096531    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:12.591302    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:12.591317    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.591323    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.591326    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.592512    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.592525    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.592532    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.592537    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.592541    9989 round_trippers.go:580]     Audit-Id: b9eb3c47-6f8d-4edb-a70c-efdabd5c9569
	I0610 19:48:12.592545    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.592550    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.592554    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.592679    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:12.592922    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:12.592929    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:12.592935    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:12.592939    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:12.594275    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:12.594281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:12.594287    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:12.594291    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:12.594299    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:12.594306    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:12.594315    9989 round_trippers.go:580]     Audit-Id: 3687110f-6d7b-4d3c-a20f-dbbdac34123e
	I0610 19:48:12.594320    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:12.594536    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.092944    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.092964    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.092975    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.092980    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.094898    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.094907    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.094913    9989 round_trippers.go:580]     Audit-Id: 4746a862-34ed-4f9d-86e0-fe54a5c8b1f0
	I0610 19:48:13.094916    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.094920    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.094923    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.094926    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.094929    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.095280    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.095536    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.095548    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.095554    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.095559    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.096553    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:13.096561    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.096567    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:13 GMT
	I0610 19:48:13.096571    9989 round_trippers.go:580]     Audit-Id: 72e59267-4587-49b7-acec-8760fef789ba
	I0610 19:48:13.096574    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.096579    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.096583    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.096586    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.096715    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.591444    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:13.591547    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.591562    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.591569    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.593926    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:13.593942    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.593954    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.593964    9989 round_trippers.go:580]     Audit-Id: 4cb26672-7251-47d6-9956-9bd290658ddd
	I0610 19:48:13.593972    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.593977    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.593982    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.593989    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.594310    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:13.594645    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:13.594658    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:13.594666    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:13.594673    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:13.596261    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:13.596268    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:13.596273    9989 round_trippers.go:580]     Audit-Id: 67e85776-8134-4d60-b04e-6745575e0722
	I0610 19:48:13.596276    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:13.596280    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:13.596282    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:13.596286    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:13.596288    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:13.596582    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:13.596755    9989 pod_ready.go:102] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"False"
	I0610 19:48:14.091643    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.091719    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.091733    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.091741    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.094245    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.094280    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.094290    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.094312    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.094319    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.094323    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.094329    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.094332    9989 round_trippers.go:580]     Audit-Id: 950f168e-9ccc-4272-accd-6013766a76ca
	I0610 19:48:14.094657    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.094995    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.095005    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.095012    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.095015    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.096236    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.096244    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.096250    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:14 GMT
	I0610 19:48:14.096256    9989 round_trippers.go:580]     Audit-Id: eebd34cd-fcec-4d30-b2c0-a119875e2dbd
	I0610 19:48:14.096260    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.096265    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.096267    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.096270    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.096411    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:14.592108    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:14.592139    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.592184    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.592191    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.594672    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:14.594684    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.594691    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.594694    9989 round_trippers.go:580]     Audit-Id: 8d594bf6-b784-4c8a-aec0-2be7690404dc
	I0610 19:48:14.594698    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.594701    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.594705    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.594709    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.595294    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:14.595634    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:14.595643    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:14.595658    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:14.595665    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:14.596893    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:14.596900    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:14.596905    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:14.596917    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:14.596921    9989 round_trippers.go:580]     Audit-Id: 3dd28d6f-84f3-46df-9566-43f2d793ebd5
	I0610 19:48:14.596923    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:14.596927    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:14.596930    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:14.597086    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.091684    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.091716    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.091756    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.091765    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.094212    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.094225    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.094232    9989 round_trippers.go:580]     Audit-Id: e13d9f6e-c973-4ff8-873c-d7b8c4b8f56d
	I0610 19:48:15.094237    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.094242    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.094248    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.094252    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.094257    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.094341    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"889","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6581 chars]
	I0610 19:48:15.094659    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.094668    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.094675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.094680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.096045    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.096057    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.096064    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.096085    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.096094    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.096100    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:15 GMT
	I0610 19:48:15.096105    9989 round_trippers.go:580]     Audit-Id: 296d945a-df5f-46db-a534-d725c2470a49
	I0610 19:48:15.096109    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.096301    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.592832    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-353000
	I0610 19:48:15.592857    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.592866    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.592872    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.595717    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.595735    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.595746    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.595754    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.595772    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.595779    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.595786    9989 round_trippers.go:580]     Audit-Id: ae59896b-cf44-4f51-a715-f1122fd8af04
	I0610 19:48:15.595790    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.596233    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-353000","namespace":"kube-system","uid":"c0357ac6-e0e4-4275-8069-a75feabf5d34","resourceVersion":"958","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.19:2379","kubernetes.io/config.hash":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.mirror":"9eb52ce026e7c7e46d26037682204769","kubernetes.io/config.seen":"2024-06-11T02:40:16.411365624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6357 chars]
	I0610 19:48:15.596566    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.596576    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.596583    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.596597    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.597753    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.597760    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.597765    9989 round_trippers.go:580]     Audit-Id: b0d6cb8a-03a6-44b3-a2ba-bbdc0b1bb2cd
	I0610 19:48:15.597769    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.597774    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.597778    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.597781    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.597783    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.597942    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.598119    9989 pod_ready.go:92] pod "etcd-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.598127    9989 pod_ready.go:81] duration metric: took 6.507043423s for pod "etcd-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.598142    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.598180    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-353000
	I0610 19:48:15.598184    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.598190    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.598194    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.599330    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.599339    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.599344    9989 round_trippers.go:580]     Audit-Id: 9ee40abb-4038-4697-bf98-1a8c08e3e5e7
	I0610 19:48:15.599355    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.599369    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.599374    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.599378    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.599383    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.599946    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-353000","namespace":"kube-system","uid":"10a38dbe-c328-4da3-b21c-efb415707889","resourceVersion":"954","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.19:8443","kubernetes.io/config.hash":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.mirror":"379016f65451a6745acaabe4de5bacb6","kubernetes.io/config.seen":"2024-06-11T02:40:16.411366586Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0610 19:48:15.600736    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.600744    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.600750    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.600755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.602146    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.602154    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.602161    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.602166    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.602170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.602172    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.602175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.602177    9989 round_trippers.go:580]     Audit-Id: c8e6ccc9-5c26-4e00-8c74-5394763932f0
	I0610 19:48:15.602374    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.602545    9989 pod_ready.go:92] pod "kube-apiserver-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.602554    9989 pod_ready.go:81] duration metric: took 4.406297ms for pod "kube-apiserver-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.602560    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.602589    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-353000
	I0610 19:48:15.602593    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.602599    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.602603    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.603793    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.603799    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.603805    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.603809    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.603813    9989 round_trippers.go:580]     Audit-Id: 06801598-bd08-4f01-b582-51da8e9dc299
	I0610 19:48:15.603815    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.603817    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.603820    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.604059    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-353000","namespace":"kube-system","uid":"a8abe47a-46b7-414f-af2b-d13ea768b0f3","resourceVersion":"956","creationTimestamp":"2024-06-11T02:40:16Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.mirror":"e9f672133b1cdaee6a9f0e6a6099fe94","kubernetes.io/config.seen":"2024-06-11T02:40:16.411367292Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0610 19:48:15.604286    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.604293    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.604298    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.604303    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605338    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.605345    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.605350    9989 round_trippers.go:580]     Audit-Id: ef3b568d-cb90-461e-91e7-4aa6b5568300
	I0610 19:48:15.605353    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.605357    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.605360    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.605364    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.605373    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.605538    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.605703    9989 pod_ready.go:92] pod "kube-controller-manager-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.605711    9989 pod_ready.go:81] duration metric: took 3.145898ms for pod "kube-controller-manager-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605717    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.605744    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f6tzv
	I0610 19:48:15.605749    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.605755    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.605759    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.606810    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.606817    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.606822    9989 round_trippers.go:580]     Audit-Id: 9e88e041-c1ec-4328-a34c-7b5e2396785a
	I0610 19:48:15.606825    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.606827    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.606830    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.606833    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.606836    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.607062    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f6tzv","generateName":"kube-proxy-","namespace":"kube-system","uid":"22e7f1f1-ca20-45a1-8882-33dbab1cb5d1","resourceVersion":"740","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6056 chars]
	I0610 19:48:15.607284    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m03
	I0610 19:48:15.607291    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.607297    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.607301    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.608273    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.608281    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.608288    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.608294    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.608298    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.608301    9989 round_trippers.go:580]     Audit-Id: 9b407b86-eb01-4135-9dfb-f26b1633b27a
	I0610 19:48:15.608303    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.608306    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.608468    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m03","uid":"0a094baa-1150-4136-9618-902a6f952a4b","resourceVersion":"949","creationTimestamp":"2024-06-11T02:42:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_42_19_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:42:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 4411 chars]
	I0610 19:48:15.608621    9989 pod_ready.go:97] node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:15.608630    9989 pod_ready.go:81] duration metric: took 2.908037ms for pod "kube-proxy-f6tzv" in "kube-system" namespace to be "Ready" ...
	E0610 19:48:15.608636    9989 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-353000-m03" hosting pod "kube-proxy-f6tzv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-353000-m03" has status "Ready":"Unknown"
	I0610 19:48:15.608641    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.608665    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz5rp
	I0610 19:48:15.608670    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.608675    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.608680    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.609749    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.609755    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.609759    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.609763    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.609766    9989 round_trippers.go:580]     Audit-Id: 9d2809bc-8920-4033-a980-81e0b514b51e
	I0610 19:48:15.609768    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.609771    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.609774    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.609923    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nz5rp","generateName":"kube-proxy-","namespace":"kube-system","uid":"8fd079c3-79d6-48f4-a419-3e75e3535a7d","resourceVersion":"502","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0610 19:48:15.610130    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000-m02
	I0610 19:48:15.610137    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.610142    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.610147    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.611124    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:15.611131    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.611136    9989 round_trippers.go:580]     Audit-Id: b7f93f53-711a-4909-8dfa-b5358e3edf06
	I0610 19:48:15.611163    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.611167    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.611170    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.611173    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.611175    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.611312    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000-m02","uid":"7f763f27-116d-41f3-ae0f-8fb05c6895c8","resourceVersion":"585","creationTimestamp":"2024-06-11T02:41:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_06_10T19_41_05_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:41:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3824 chars]
	I0610 19:48:15.611447    9989 pod_ready.go:92] pod "kube-proxy-nz5rp" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.611454    9989 pod_ready.go:81] duration metric: took 2.808014ms for pod "kube-proxy-nz5rp" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.611459    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.794030    9989 request.go:629] Waited for 182.512666ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:15.794147    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v7s4q
	I0610 19:48:15.794157    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.794169    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.794177    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.796912    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:15.796926    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.796934    9989 round_trippers.go:580]     Audit-Id: 3854ac46-1b79-4426-8236-7591cc550ae2
	I0610 19:48:15.796938    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.796942    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.796946    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.796978    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.796983    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.797082    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-v7s4q","generateName":"kube-proxy-","namespace":"kube-system","uid":"facfe7a3-8b6b-4328-b0ce-de6504ad189e","resourceVersion":"919","creationTimestamp":"2024-06-11T02:40:31Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"11b990fb-c4a5-453a-abff-58afa71f4a04","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"11b990fb-c4a5-453a-abff-58afa71f4a04\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6028 chars]
	I0610 19:48:15.994033    9989 request.go:629] Waited for 196.636422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994102    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:15.994108    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:15.994117    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:15.994122    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:15.995838    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:15.995848    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:15.995853    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:15.995857    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:15.995860    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:15.995863    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:15.995866    9989 round_trippers.go:580]     Audit-Id: 038e8b7e-5833-4987-8dec-d70fd06fd8f3
	I0610 19:48:15.995869    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:15.996172    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:15.996363    9989 pod_ready.go:92] pod "kube-proxy-v7s4q" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:15.996371    9989 pod_ready.go:81] duration metric: took 384.920541ms for pod "kube-proxy-v7s4q" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:15.996378    9989 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.194182    9989 request.go:629] Waited for 197.750366ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194292    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-353000
	I0610 19:48:16.194302    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.194312    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.194320    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.196795    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.196809    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.196822    9989 round_trippers.go:580]     Audit-Id: 038d5bdb-1b7f-4b04-89c8-33d598c4b1d6
	I0610 19:48:16.196840    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.196849    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.196855    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.196880    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.196889    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.197056    9989 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-353000","namespace":"kube-system","uid":"8fce8cdd-f6c1-4350-93fe-050f169721bb","resourceVersion":"943","creationTimestamp":"2024-06-11T02:40:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.mirror":"e32526d44d593e1d706155fc44f1f9db","kubernetes.io/config.seen":"2024-06-11T02:40:11.487556570Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0610 19:48:16.393212    9989 request.go:629] Waited for 195.873626ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393266    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes/multinode-353000
	I0610 19:48:16.393272    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.393278    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.393282    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.395123    9989 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 19:48:16.395136    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.395141    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.395145    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.395150    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.395153    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.395155    9989 round_trippers.go:580]     Audit-Id: ab94a6ed-7607-433e-8303-56582026becf
	I0610 19:48:16.395158    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.395272    9989 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-06-11T02:40:14Z","fieldsType":"FieldsV1","fi [truncated 5173 chars]
	I0610 19:48:16.395463    9989 pod_ready.go:92] pod "kube-scheduler-multinode-353000" in "kube-system" namespace has status "Ready":"True"
	I0610 19:48:16.395471    9989 pod_ready.go:81] duration metric: took 399.102366ms for pod "kube-scheduler-multinode-353000" in "kube-system" namespace to be "Ready" ...
	I0610 19:48:16.395478    9989 pod_ready.go:38] duration metric: took 11.315661502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
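
The "Waited for ... due to client-side throttling" entries scattered through the pod-readiness phase above are produced by the Kubernetes client's own request rate limiter, not by server-side API Priority and Fairness — which is exactly why the log says "not priority and fairness". A minimal standalone sketch of that client-side pattern using golang.org/x/time/rate; the 5 QPS / burst-10 values are illustrative assumptions, not read from this log or from minikube's configuration:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical client-side limiter (illustrative: 5 req/s, burst 10).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 12; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if wait := time.Since(start); wait > time.Millisecond {
			// Corresponds to the "Waited for ... due to client-side
			// throttling" lines in the log above.
			fmt.Printf("request %d waited %v before sending\n", i, wait)
		}
	}
}
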
	I0610 19:48:16.395490    9989 api_server.go:52] waiting for apiserver process to appear ...
	I0610 19:48:16.395535    9989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:48:16.407763    9989 command_runner.go:130] > 1536
	I0610 19:48:16.407838    9989 api_server.go:72] duration metric: took 13.032244276s to wait for apiserver process to appear ...
	I0610 19:48:16.407853    9989 api_server.go:88] waiting for apiserver healthz status ...
	I0610 19:48:16.407872    9989 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:48:16.410818    9989 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:48:16.410851    9989 round_trippers.go:463] GET https://192.169.0.19:8443/version
	I0610 19:48:16.410855    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.410861    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.410865    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.411473    9989 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 19:48:16.411482    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.411486    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.411489    9989 round_trippers.go:580]     Content-Length: 263
	I0610 19:48:16.411493    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:16 GMT
	I0610 19:48:16.411496    9989 round_trippers.go:580]     Audit-Id: 9e18606b-4bce-473d-8045-05f615ea3c0b
	I0610 19:48:16.411499    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.411502    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.411504    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.411534    9989 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 19:48:16.411563    9989 api_server.go:141] control plane version: v1.30.1
	I0610 19:48:16.411571    9989 api_server.go:131] duration metric: took 3.713676ms to wait for apiserver health ...
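
The two probes above — GET /healthz (expecting the literal body "ok"), then GET /version to read the control-plane version — can be reproduced with a plain HTTP client. A standalone sketch, not minikube's code: the address is taken from the log, but skipping TLS verification here is a simplification for illustration, where the real client authenticates with the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Sketch only: a real client verifies the apiserver's TLS certificate
	// and presents client credentials.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.169.0.19:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d\n%s\n", path, resp.StatusCode, body)
	}
}
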
	I0610 19:48:16.411576    9989 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 19:48:16.593917    9989 request.go:629] Waited for 182.303257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593969    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.593982    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.594020    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.594030    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.598338    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.598347    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.598352    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.598356    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.598359    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.598362    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.598366    9989 round_trippers.go:580]     Audit-Id: 739ff66b-4603-4a26-9ed9-1936484cf2df
	I0610 19:48:16.598369    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.598986    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:16.600809    9989 system_pods.go:59] 12 kube-system pods found
	I0610 19:48:16.600820    9989 system_pods.go:61] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:16.600824    9989 system_pods.go:61] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:16.600827    9989 system_pods.go:61] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:16.600829    9989 system_pods.go:61] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:16.600832    9989 system_pods.go:61] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:16.600835    9989 system_pods.go:61] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:16.600838    9989 system_pods.go:61] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:16.600841    9989 system_pods.go:61] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:16.600843    9989 system_pods.go:61] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:16.600846    9989 system_pods.go:61] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:16.600849    9989 system_pods.go:61] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:16.600851    9989 system_pods.go:61] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:16.600856    9989 system_pods.go:74] duration metric: took 189.281493ms to wait for pod list to return data ...
	I0610 19:48:16.600861    9989 default_sa.go:34] waiting for default service account to be created ...
	I0610 19:48:16.794887    9989 request.go:629] Waited for 193.957918ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794986    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/default/serviceaccounts
	I0610 19:48:16.794997    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.795009    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.795017    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.797833    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:16.797849    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.797856    9989 round_trippers.go:580]     Content-Length: 261
	I0610 19:48:16.797860    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.797863    9989 round_trippers.go:580]     Audit-Id: a5fbe232-e1a9-4892-a78a-2013b453a7c8
	I0610 19:48:16.797870    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.797873    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.797878    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.797881    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.797896    9989 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"809c40cb-86f1-483d-98cc-1b46432644d5","resourceVersion":"323","creationTimestamp":"2024-06-11T02:40:31Z"}}]}
	I0610 19:48:16.798039    9989 default_sa.go:45] found service account: "default"
	I0610 19:48:16.798051    9989 default_sa.go:55] duration metric: took 197.191772ms for default service account to be created ...
	I0610 19:48:16.798058    9989 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 19:48:16.994131    9989 request.go:629] Waited for 196.005872ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994194    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/namespaces/kube-system/pods
	I0610 19:48:16.994203    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:16.994251    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:16.994262    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:16.998793    9989 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 19:48:16.998811    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:16.998819    9989 round_trippers.go:580]     Audit-Id: 3a3c6305-a6bc-4dd6-990c-e7f5db70738f
	I0610 19:48:16.998824    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:16.998829    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:16.998845    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:16.998850    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:16.998853    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:16.999210    9989 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-x984g","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"b2354103-bb58-4679-869f-a2ada1414513","resourceVersion":"939","creationTimestamp":"2024-06-11T02:40:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-06-11T02:40:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"96aecf97-8fa9-4024-b4d8-5a87f88d0cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 86435 chars]
	I0610 19:48:17.001028    9989 system_pods.go:86] 12 kube-system pods found
	I0610 19:48:17.001039    9989 system_pods.go:89] "coredns-7db6d8ff4d-x984g" [b2354103-bb58-4679-869f-a2ada1414513] Running
	I0610 19:48:17.001043    9989 system_pods.go:89] "etcd-multinode-353000" [c0357ac6-e0e4-4275-8069-a75feabf5d34] Running
	I0610 19:48:17.001047    9989 system_pods.go:89] "kindnet-8mqj8" [f442b910-83c7-4b1a-91cd-a8dfd7dc15c0] Running
	I0610 19:48:17.001050    9989 system_pods.go:89] "kindnet-j4h99" [8bc56489-504a-4af4-9ce6-f68a2c25e867] Running
	I0610 19:48:17.001054    9989 system_pods.go:89] "kindnet-mcx2t" [87889817-69d4-4e38-8da9-ec63f8ec0411] Running
	I0610 19:48:17.001057    9989 system_pods.go:89] "kube-apiserver-multinode-353000" [10a38dbe-c328-4da3-b21c-efb415707889] Running
	I0610 19:48:17.001062    9989 system_pods.go:89] "kube-controller-manager-multinode-353000" [a8abe47a-46b7-414f-af2b-d13ea768b0f3] Running
	I0610 19:48:17.001065    9989 system_pods.go:89] "kube-proxy-f6tzv" [22e7f1f1-ca20-45a1-8882-33dbab1cb5d1] Running
	I0610 19:48:17.001069    9989 system_pods.go:89] "kube-proxy-nz5rp" [8fd079c3-79d6-48f4-a419-3e75e3535a7d] Running
	I0610 19:48:17.001072    9989 system_pods.go:89] "kube-proxy-v7s4q" [facfe7a3-8b6b-4328-b0ce-de6504ad189e] Running
	I0610 19:48:17.001076    9989 system_pods.go:89] "kube-scheduler-multinode-353000" [8fce8cdd-f6c1-4350-93fe-050f169721bb] Running
	I0610 19:48:17.001079    9989 system_pods.go:89] "storage-provisioner" [95aa7c05-392e-49d4-8604-12400011c22b] Running
	I0610 19:48:17.001084    9989 system_pods.go:126] duration metric: took 203.027203ms to wait for k8s-apps to be running ...
	I0610 19:48:17.001090    9989 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 19:48:17.001139    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:48:17.012670    9989 system_svc.go:56] duration metric: took 11.575591ms WaitForService to wait for kubelet
	I0610 19:48:17.012687    9989 kubeadm.go:576] duration metric: took 13.637116157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 19:48:17.012699    9989 node_conditions.go:102] verifying NodePressure condition ...
	I0610 19:48:17.194231    9989 request.go:629] Waited for 181.491134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194340    9989 round_trippers.go:463] GET https://192.169.0.19:8443/api/v1/nodes
	I0610 19:48:17.194351    9989 round_trippers.go:469] Request Headers:
	I0610 19:48:17.194363    9989 round_trippers.go:473]     Accept: application/json, */*
	I0610 19:48:17.194370    9989 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0610 19:48:17.197119    9989 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 19:48:17.197137    9989 round_trippers.go:577] Response Headers:
	I0610 19:48:17.197149    9989 round_trippers.go:580]     Content-Type: application/json
	I0610 19:48:17.197156    9989 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 55fa1ce6-0a03-4538-8976-51c545348d1d
	I0610 19:48:17.197162    9989 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0d4bc33-7cc4-46fb-bed6-a900f14cc1de
	I0610 19:48:17.197169    9989 round_trippers.go:580]     Date: Tue, 11 Jun 2024 02:48:17 GMT
	I0610 19:48:17.197176    9989 round_trippers.go:580]     Audit-Id: d3d91bd9-0b1c-4a20-9ebb-04b5962cdbc6
	I0610 19:48:17.197183    9989 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 19:48:17.197758    9989 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"958"},"items":[{"metadata":{"name":"multinode-353000","uid":"a83c4ca3-34a8-4b86-a93f-a6d33c8c6dbb","resourceVersion":"928","creationTimestamp":"2024-06-11T02:40:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-353000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b8e7e33180e1f47cc83cca2e1a263af6c57df959","minikube.k8s.io/name":"multinode-353000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_06_10T19_40_17_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 15445 chars]
	I0610 19:48:17.198317    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198329    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198338    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198342    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198348    9989 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 19:48:17.198354    9989 node_conditions.go:123] node cpu capacity is 2
	I0610 19:48:17.198359    9989 node_conditions.go:105] duration metric: took 185.662539ms to run NodePressure ...
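
The per-node capacity figures above (ephemeral storage 17734596Ki, cpu 2, once per node) come out of the NodeList returned by /api/v1/nodes. A sketch of extracting them from that response shape; the struct below models only the fields used here, and the sample JSON is abbreviated from the truncated response body in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeList models just the NodeList fields read here: each item's
// metadata.name and status.capacity map.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"multinode-353000"},
	  "status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}]}`)
	var nl nodeList
	if err := json.Unmarshal(raw, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name,
			n.Status.Capacity["cpu"],
			n.Status.Capacity["ephemeral-storage"])
	}
}
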
	I0610 19:48:17.198370    9989 start.go:240] waiting for startup goroutines ...
	I0610 19:48:17.198378    9989 start.go:245] waiting for cluster config update ...
	I0610 19:48:17.198401    9989 start.go:254] writing updated cluster config ...
	I0610 19:48:17.220816    9989 out.go:177] 
	I0610 19:48:17.242724    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:17.242860    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.265195    9989 out.go:177] * Starting "multinode-353000-m02" worker node in "multinode-353000" cluster
	I0610 19:48:17.307293    9989 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 19:48:17.307327    9989 cache.go:56] Caching tarball of preloaded images
	I0610 19:48:17.307547    9989 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 19:48:17.307565    9989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 19:48:17.307689    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.308695    9989 start.go:360] acquireMachinesLock for multinode-353000-m02: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 19:48:17.308814    9989 start.go:364] duration metric: took 94.629µs to acquireMachinesLock for "multinode-353000-m02"
	I0610 19:48:17.308843    9989 start.go:96] Skipping create...Using existing machine configuration
	I0610 19:48:17.308851    9989 fix.go:54] fixHost starting: m02
	I0610 19:48:17.309302    9989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:48:17.309340    9989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:48:17.318771    9989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53805
	I0610 19:48:17.319159    9989 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:48:17.319519    9989 main.go:141] libmachine: Using API Version  1
	I0610 19:48:17.319536    9989 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:48:17.319731    9989 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:48:17.319893    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.319997    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:48:17.320076    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.320165    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:48:17.321139    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.321165    9989 fix.go:112] recreateIfNeeded on multinode-353000-m02: state=Stopped err=<nil>
	I0610 19:48:17.321176    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	W0610 19:48:17.321267    9989 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 19:48:17.342117    9989 out.go:177] * Restarting existing hyperkit VM for "multinode-353000-m02" ...
	I0610 19:48:17.384293    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .Start
	I0610 19:48:17.384586    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.384618    9989 main.go:141] libmachine: (multinode-353000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid
	I0610 19:48:17.386481    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 9545 missing from process table
	I0610 19:48:17.386504    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | pid 9545 is in state "Stopped"
	I0610 19:48:17.386538    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid...
	I0610 19:48:17.386916    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Using UUID 3b15a703-00dc-45e7-88e9-620fa037ae16
	I0610 19:48:17.404856    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Generated MAC 9a:45:71:59:94:c9
	I0610 19:48:17.404885    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000
	I0610 19:48:17.405069    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0610 19:48:17.405097    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3b15a703-00dc-45e7-88e9-620fa037ae16", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b3560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0610 19:48:17.405170    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3b15a703-00dc-45e7-88e9-620fa037ae16", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/j
enkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"}
	I0610 19:48:17.405218    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3b15a703-00dc-45e7-88e9-620fa037ae16 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/multinode-353000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/mult
inode-353000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-353000"
	I0610 19:48:17.405234    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 19:48:17.406727    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 DEBUG: hyperkit: Pid is 10028
	I0610 19:48:17.407115    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Attempt 0
	I0610 19:48:17.407129    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:48:17.407257    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 10028
	I0610 19:48:17.409351    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Searching for 9a:45:71:59:94:c9 in /var/db/dhcpd_leases ...
	I0610 19:48:17.409467    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0610 19:48:17.409488    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ID:1,6e:10:a7:68:76:8c Lease:0x66690bdc}
	I0610 19:48:17.409512    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:fe:8b:79:f3:b9:7 ID:1,fe:8b:79:f3:b9:7 Lease:0x66690b49}
	I0610 19:48:17.409523    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ID:1,9a:45:71:59:94:c9 Lease:0x66690ab4}
	I0610 19:48:17.409543    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | Found match: 9a:45:71:59:94:c9
	I0610 19:48:17.409570    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | IP: 192.169.0.20
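
Above, the driver recovers the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC address it generated. A sketch of that matching step; the lease text below is abbreviated from the entries logged above, and the file format is assumed from those log lines rather than taken from the hyperkit driver's actual parser:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated lease entries, shaped like the dhcp entries in the log.
	leases := `{Name:minikube IPAddress:192.169.0.19 HWAddress:6e:10:a7:68:76:8c ...}
{Name:minikube IPAddress:192.169.0.20 HWAddress:9a:45:71:59:94:c9 ...}`

	mac := "9a:45:71:59:94:c9"
	// Capture the IPAddress field of the entry whose HWAddress matches.
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:` + regexp.QuoteMeta(mac))
	if m := re.FindStringSubmatch(leases); m != nil {
		fmt.Println("IP:", m[1]) // prints: IP: 192.169.0.20
	}
}
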
	I0610 19:48:17.409579    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetConfigRaw
	I0610 19:48:17.410301    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:17.410512    9989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/multinode-353000/config.json ...
	I0610 19:48:17.410985    9989 machine.go:94] provisionDockerMachine start ...
	I0610 19:48:17.410995    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:17.411096    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:17.411190    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:17.411313    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411449    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:17.411555    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:17.411688    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:17.411842    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:17.411849    9989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 19:48:17.415070    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 19:48:17.423513    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 19:48:17.424462    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.424485    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.424494    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.424500    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.810455    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 19:48:17.810477    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 19:48:17.925056    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 19:48:17.925078    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 19:48:17.925090    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 19:48:17.925102    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 19:48:17.925970    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 19:48:17.925981    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 19:48:23.237466    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 19:48:23.237549    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 19:48:23.237560    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 19:48:23.261554    9989 main.go:141] libmachine: (multinode-353000-m02) DBG | 2024/06/10 19:48:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 19:48:52.481015    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 19:48:52.481029    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481167    9989 buildroot.go:166] provisioning hostname "multinode-353000-m02"
	I0610 19:48:52.481180    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.481288    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.481384    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.481465    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481540    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.481624    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.481764    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.481913    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.481922    9989 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-353000-m02 && echo "multinode-353000-m02" | sudo tee /etc/hostname
	I0610 19:48:52.555898    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-353000-m02
	
	I0610 19:48:52.555912    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.556047    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.556155    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556244    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.556351    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.556487    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.556669    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.556682    9989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-353000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-353000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-353000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 19:48:52.627006    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 19:48:52.627024    9989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 19:48:52.627038    9989 buildroot.go:174] setting up certificates
	I0610 19:48:52.627044    9989 provision.go:84] configureAuth start
	I0610 19:48:52.627052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetMachineName
	I0610 19:48:52.627185    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:52.627290    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.627382    9989 provision.go:143] copyHostCerts
	I0610 19:48:52.627410    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627456    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 19:48:52.627462    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 19:48:52.627594    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 19:48:52.627791    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627821    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 19:48:52.627825    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 19:48:52.627924    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 19:48:52.628081    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628109    9989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 19:48:52.628113    9989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 19:48:52.628206    9989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 19:48:52.628383    9989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.multinode-353000-m02 san=[127.0.0.1 192.169.0.20 localhost minikube multinode-353000-m02]
	I0610 19:48:52.864621    9989 provision.go:177] copyRemoteCerts
	I0610 19:48:52.864670    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 19:48:52.864684    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.864871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.865093    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.865223    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.865370    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:52.902301    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 19:48:52.902374    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 19:48:52.922200    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 19:48:52.922272    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 19:48:52.942419    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 19:48:52.942486    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 19:48:52.961961    9989 provision.go:87] duration metric: took 334.921541ms to configureAuth
	I0610 19:48:52.961973    9989 buildroot.go:189] setting minikube options for container-runtime
	I0610 19:48:52.962132    9989 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:48:52.962145    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:52.962271    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:52.962375    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:52.962471    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962561    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:52.962649    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:52.962765    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:52.962891    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:52.962899    9989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 19:48:53.026409    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 19:48:53.026421    9989 buildroot.go:70] root file system type: tmpfs
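The probe above is how minikube decides whether the guest rootfs is volatile: df prints the filesystem type of /, and tail strips the header row. On this buildroot guest it reports tmpfs, which is why the docker unit is rewritten on every provision rather than assumed to persist. Run locally, the output shape is (assuming GNU coreutils df):

	$ df --output=fstype / | tail -n 1
	tmpfs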
	I0610 19:48:53.026513    9989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 19:48:53.026532    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.026664    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.026757    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026854    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.026936    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.027075    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.027217    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.027260    9989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.19"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 19:48:53.101854    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.19
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
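The comments embedded in the unit explain the one non-obvious systemd idiom in play: when an override inherits a base unit that already defines ExecStart=, the override must first clear it with an empty ExecStart= line, because a Type=notify service may only have one start command. A minimal drop-in showing just that pattern (the path and dockerd flags below are illustrative):

	# /etc/systemd/system/docker.service.d/override.conf (hypothetical path)
	[Service]
	# The empty assignment clears ExecStart inherited from the base unit;
	# without it systemd refuses to start with "more than one ExecStart=".
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock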
	I0610 19:48:53.101871    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:53.102004    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:53.102084    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102159    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:53.102254    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:53.102385    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:53.102564    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:53.102577    9989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 19:48:54.746316    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
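The diff-or-replace one-liner above is an install-if-changed idiom: diff exits non-zero both when the two files differ and, as in this run, when the target does not exist yet ("can't stat"), so either case falls through to installing the new unit and restarting docker. The same pattern in isolation:

	# Install-if-changed sketch (paths mirror this run; behavior generic):
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  # diff is non-zero when the files differ or "$cur" is missing,
	  # so both cases install the candidate and (re)start the service:
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	}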
	I0610 19:48:54.746329    9989 machine.go:97] duration metric: took 37.336632265s to provisionDockerMachine
	I0610 19:48:54.746338    9989 start.go:293] postStartSetup for "multinode-353000-m02" (driver="hyperkit")
	I0610 19:48:54.746346    9989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 19:48:54.746364    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.746553    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 19:48:54.746573    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.746671    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.746768    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.746849    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.746924    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.784393    9989 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 19:48:54.787362    9989 command_runner.go:130] > NAME=Buildroot
	I0610 19:48:54.787371    9989 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 19:48:54.787375    9989 command_runner.go:130] > ID=buildroot
	I0610 19:48:54.787379    9989 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 19:48:54.787385    9989 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 19:48:54.787467    9989 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 19:48:54.787474    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 19:48:54.787570    9989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 19:48:54.787737    9989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 19:48:54.787743    9989 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> /etc/ssl/certs/64852.pem
	I0610 19:48:54.787933    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 19:48:54.795249    9989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 19:48:54.815317    9989 start.go:296] duration metric: took 68.971403ms for postStartSetup
	I0610 19:48:54.815337    9989 fix.go:56] duration metric: took 37.507788969s for fixHost
	I0610 19:48:54.815352    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.815497    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.815593    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815691    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.815780    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.815896    9989 main.go:141] libmachine: Using SSH client type: native
	I0610 19:48:54.816039    9989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x755bf00] 0x755ec60 <nil>  [] 0s} 192.169.0.20 22 <nil> <nil>}
	I0610 19:48:54.816046    9989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 19:48:54.878000    9989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718074135.243306878
	
	I0610 19:48:54.878010    9989 fix.go:216] guest clock: 1718074135.243306878
	I0610 19:48:54.878017    9989 fix.go:229] Guest: 2024-06-10 19:48:55.243306878 -0700 PDT Remote: 2024-06-10 19:48:54.815342 -0700 PDT m=+195.166531099 (delta=427.964878ms)
	I0610 19:48:54.878027    9989 fix.go:200] guest clock delta is within tolerance: 427.964878ms
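fix.go samples the guest clock over SSH (date +%s.%N) and compares it with the host clock; the ~428ms delta here is inside tolerance, so no resync is needed. A standalone sketch of the same measurement (user and IP taken from this run; what counts as "within tolerance" is minikube's call, not shown here):

	# Measure guest-vs-host clock skew (sketch):
	guest=$(ssh docker@192.169.0.20 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest - host = $(echo "$guest - $host" | bc) seconds"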
	I0610 19:48:54.878031    9989 start.go:83] releasing machines lock for "multinode-353000-m02", held for 37.570510595s
	I0610 19:48:54.878052    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.878188    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:48:54.899842    9989 out.go:177] * Found network options:
	I0610 19:48:54.920775    9989 out.go:177]   - NO_PROXY=192.169.0.19
	W0610 19:48:54.941666    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.941707    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942405    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942613    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:48:54.942729    9989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 19:48:54.942761    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	W0610 19:48:54.942841    9989 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 19:48:54.942952    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.942957    9989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 19:48:54.942979    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:48:54.943187    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943226    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:48:54.943428    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:48:54.943489    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943627    9989 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:48:54.943669    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.943798    9989 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:48:54.979160    9989 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 19:48:54.979221    9989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 19:48:54.979276    9989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 19:48:55.024346    9989 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 19:48:55.024519    9989 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0610 19:48:55.024548    9989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 19:48:55.024558    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.024672    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.039727    9989 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0610 19:48:55.039987    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 19:48:55.049027    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 19:48:55.058181    9989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.058230    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 19:48:55.067256    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.076291    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 19:48:55.085310    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 19:48:55.094333    9989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 19:48:55.103537    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 19:48:55.112676    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 19:48:55.121615    9989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 19:48:55.130814    9989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 19:48:55.139162    9989 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 19:48:55.139338    9989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 19:48:55.147700    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.246020    9989 ssh_runner.go:195] Run: sudo systemctl restart containerd
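The run of sed edits just above rewrites /etc/containerd/config.toml so containerd agrees with the "cgroupfs" driver decision: pause image, OOM-score handling, the runc v2 shim, the CNI conf dir, unprivileged ports, and SystemdCgroup=false. Reconstructed from those sed expressions (not read back from the guest), the touched fragment plausibly ends up as:

	# /etc/containerd/config.toml fragment implied by the edits above
	# (reconstruction, not captured from the VM):
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false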
	I0610 19:48:55.266428    9989 start.go:494] detecting cgroup driver to use...
	I0610 19:48:55.266504    9989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 19:48:55.279486    9989 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0610 19:48:55.279959    9989 command_runner.go:130] > [Unit]
	I0610 19:48:55.279969    9989 command_runner.go:130] > Description=Docker Application Container Engine
	I0610 19:48:55.279974    9989 command_runner.go:130] > Documentation=https://docs.docker.com
	I0610 19:48:55.279987    9989 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0610 19:48:55.279992    9989 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0610 19:48:55.279996    9989 command_runner.go:130] > StartLimitBurst=3
	I0610 19:48:55.280000    9989 command_runner.go:130] > StartLimitIntervalSec=60
	I0610 19:48:55.280003    9989 command_runner.go:130] > [Service]
	I0610 19:48:55.280006    9989 command_runner.go:130] > Type=notify
	I0610 19:48:55.280014    9989 command_runner.go:130] > Restart=on-failure
	I0610 19:48:55.280019    9989 command_runner.go:130] > Environment=NO_PROXY=192.169.0.19
	I0610 19:48:55.280025    9989 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0610 19:48:55.280036    9989 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0610 19:48:55.280044    9989 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0610 19:48:55.280049    9989 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0610 19:48:55.280056    9989 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0610 19:48:55.280061    9989 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0610 19:48:55.280067    9989 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0610 19:48:55.280078    9989 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0610 19:48:55.280085    9989 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0610 19:48:55.280088    9989 command_runner.go:130] > ExecStart=
	I0610 19:48:55.280100    9989 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0610 19:48:55.280104    9989 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0610 19:48:55.280112    9989 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0610 19:48:55.280118    9989 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0610 19:48:55.280122    9989 command_runner.go:130] > LimitNOFILE=infinity
	I0610 19:48:55.280124    9989 command_runner.go:130] > LimitNPROC=infinity
	I0610 19:48:55.280128    9989 command_runner.go:130] > LimitCORE=infinity
	I0610 19:48:55.280136    9989 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0610 19:48:55.280141    9989 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0610 19:48:55.280145    9989 command_runner.go:130] > TasksMax=infinity
	I0610 19:48:55.280149    9989 command_runner.go:130] > TimeoutStartSec=0
	I0610 19:48:55.280154    9989 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0610 19:48:55.280158    9989 command_runner.go:130] > Delegate=yes
	I0610 19:48:55.280163    9989 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0610 19:48:55.280170    9989 command_runner.go:130] > KillMode=process
	I0610 19:48:55.280175    9989 command_runner.go:130] > [Install]
	I0610 19:48:55.280181    9989 command_runner.go:130] > WantedBy=multi-user.target
	I0610 19:48:55.280416    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.297490    9989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 19:48:55.315143    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 19:48:55.326478    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.337749    9989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 19:48:55.355043    9989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 19:48:55.365212    9989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 19:48:55.380927    9989 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0610 19:48:55.381306    9989 ssh_runner.go:195] Run: which cri-dockerd
	I0610 19:48:55.384049    9989 command_runner.go:130] > /usr/bin/cri-dockerd
	I0610 19:48:55.384254    9989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 19:48:55.391544    9989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 19:48:55.404989    9989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 19:48:55.503276    9989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 19:48:55.597218    9989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 19:48:55.597255    9989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
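docker.go then writes a small /etc/docker/daemon.json (130 bytes in this run) to pin Docker itself to the cgroupfs driver. The file's contents are not echoed in the log; a minimal daemon.json with that effect would be (an assumption, minikube's real file may carry additional keys):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}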
	I0610 19:48:55.612389    9989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 19:48:55.702999    9989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 19:49:56.756006    9989 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0610 19:49:56.756023    9989 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0610 19:49:56.756031    9989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.055138149s)
	I0610 19:49:56.756087    9989 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 19:49:56.764935    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.764947    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	I0610 19:49:56.764956    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	I0610 19:49:56.764968    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0610 19:49:56.764978    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	I0610 19:49:56.764989    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0610 19:49:56.765000    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765011    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0610 19:49:56.765020    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765044    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765058    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765082    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765093    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765103    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0610 19:49:56.765113    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765122    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765131    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765146    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765155    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0610 19:49:56.765181    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0610 19:49:56.765190    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0610 19:49:56.765199    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0610 19:49:56.765208    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0610 19:49:56.765218    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	I0610 19:49:56.765227    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0610 19:49:56.765235    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0610 19:49:56.765246    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0610 19:49:56.765255    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0610 19:49:56.765264    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0610 19:49:56.765273    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0610 19:49:56.765282    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765291    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0610 19:49:56.765300    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0610 19:49:56.765308    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0610 19:49:56.765318    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765327    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765336    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765345    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765356    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765365    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765391    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765402    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0610 19:49:56.765411    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765420    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765435    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765443    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765452    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765460    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765469    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765478    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765487    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765497    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765505    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765514    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765523    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765532    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0610 19:49:56.765540    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765549    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765558    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765568    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765580    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0610 19:49:56.765838    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0610 19:49:56.765857    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0610 19:49:56.765870    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0610 19:49:56.765878    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0610 19:49:56.765888    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	I0610 19:49:56.765896    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0610 19:49:56.765905    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0610 19:49:56.765913    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0610 19:49:56.765921    9989 command_runner.go:130] > Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	I0610 19:49:56.765929    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0610 19:49:56.765936    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	I0610 19:49:56.765949    9989 command_runner.go:130] > Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0610 19:49:56.765956    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	I0610 19:49:56.765966    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	I0610 19:49:56.765973    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	I0610 19:49:56.765980    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	I0610 19:49:56.765987    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	I0610 19:49:56.765993    9989 command_runner.go:130] > Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	I0610 19:49:56.765998    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0610 19:49:56.766006    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	I0610 19:49:56.766015    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0610 19:49:56.766026    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	I0610 19:49:56.766038    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0610 19:49:56.766047    9989 command_runner.go:130] > Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0610 19:49:56.766063    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0610 19:49:56.766074    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0610 19:49:56.766107    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0610 19:49:56.766115    9989 command_runner.go:130] > Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	I0610 19:49:56.766124    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0610 19:49:56.766133    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0610 19:49:56.766140    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0610 19:49:56.766146    9989 command_runner.go:130] > Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
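The journal narrows the failure down: the first dockerd (pid 513) starts cleanly with its managed containerd and is stopped, then the second dockerd (pid 805) blocks for the full minute dialing /run/containerd/containerd.sock until the context deadline expires, so docker.service exits 1/FAILURE and minikube aborts with RUNTIME_ENABLE. Generic first-line checks on the guest for this class of failure (diagnostic suggestions, not commands from this run):

	# Did the system containerd come back after the earlier stop/restart?
	systemctl status containerd docker
	journalctl -u containerd --no-pager | tail -n 50
	# Does the socket dockerd is dialing actually exist?
	ls -l /run/containerd/containerd.sock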
	I0610 19:49:56.790586    9989 out.go:177] 
	W0610 19:49:56.812421    9989 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 02:48:52 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612183250Z" level=info msg="Starting up"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.612906581Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:52.617473515Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.630323995Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643902885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643933442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.643976383Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644009351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644047000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644059822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644176217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644214688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644229937Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644237984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644266463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.644400520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646267084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646303704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646415855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646452940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646480959Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646495060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.646503183Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647603717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647649922Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647709442Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647723324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647737931Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.647841957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648038111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648135126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648169132Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648180244Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648190649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648202647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648212879Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648224537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648234781Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648242925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648250880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648261751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648282723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648293973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648303945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648314662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648322872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648330832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648339925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648348318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648356938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648366146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648373534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648380879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648388700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648402573Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648447168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648458515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648465980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648510114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648549025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648561678Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648576438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648759361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648780904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648790633Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.648977257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649037003Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649063662Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 02:48:52 multinode-353000-m02 dockerd[519]: time="2024-06-11T02:48:52.649102414Z" level=info msg="containerd successfully booted in 0.020335s"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.635454656Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.644320232Z" level=info msg="Loading containers: start."
	Jun 11 02:48:53 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:53.828537347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.050215042Z" level=info msg="Loading containers: done."
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090688149Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.090865249Z" level=info msg="Daemon has completed initialization"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110222842Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 02:48:55 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:55.110385806Z" level=info msg="API listen on [::]:2376"
	Jun 11 02:48:55 multinode-353000-m02 systemd[1]: Started Docker Application Container Engine.
	Jun 11 02:48:56 multinode-353000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.080086973Z" level=info msg="Processing signal 'terminated'"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081325196Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081585070Z" level=info msg="Daemon shutdown complete"
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081639222Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 02:48:56 multinode-353000-m02 dockerd[513]: time="2024-06-11T02:48:56.081652859Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 02:48:57 multinode-353000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 02:48:57 multinode-353000-m02 dockerd[805]: time="2024-06-11T02:48:57.133458901Z" level=info msg="Starting up"
	Jun 11 02:49:57 multinode-353000-m02 dockerd[805]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 02:49:57 multinode-353000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0610 19:49:56.812533    9989 out.go:239] * 
	W0610 19:49:56.813811    9989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 19:49:56.877394    9989 out.go:177] 
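The root failure above is not in Docker itself: dockerd on multinode-353000-m02 timed out dialing containerd's socket, so the unit never came up. A minimal probe of the same socket (path taken from the log line; run inside the guest, offered as a sketch rather than minikube's own diagnostic) looks like:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// dockerd reported "context deadline exceeded" on this dial; doing it
    	// by hand distinguishes a socket file that is missing entirely from
    	// one that exists but is never accepted by a containerd process.
    	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
    	if err != nil {
    		fmt.Println("containerd socket not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("containerd socket accepted the connection")
    }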
	
	
	==> Docker <==
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862187389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862199605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.862786603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.960767168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.960985026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.961000004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:08 multinode-353000 dockerd[787]: time="2024-06-11T02:48:08.965728902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 cri-dockerd[1001]: time="2024-06-11T02:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bbe0ba4f26fa092aabac2dd15236185366045b7fe696deb8ca62e57cf21bba64/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 11 02:48:09 multinode-353000 cri-dockerd[1001]: time="2024-06-11T02:48:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e6d5e599ec17df742f5e6d8e8e063567cfce9334498434e4e9a9f94d2f0385da/resolv.conf as [nameserver 192.169.0.1]"
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129737312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129798261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.129895010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.130045927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194027743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194077210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194088239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:09 multinode-353000 dockerd[787]: time="2024-06-11T02:48:09.194261585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:31 multinode-353000 dockerd[781]: time="2024-06-11T02:48:31.767548453Z" level=info msg="ignoring event" container=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767817666Z" level=info msg="shim disconnected" id=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 namespace=moby
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767906619Z" level=warning msg="cleaning up after shim disconnected" id=310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2 namespace=moby
	Jun 11 02:48:31 multinode-353000 dockerd[787]: time="2024-06-11T02:48:31.767915567Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134344966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134410534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134420430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 11 02:48:47 multinode-353000 dockerd[787]: time="2024-06-11T02:48:47.134480564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94827c43a9544       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   54b822818f491       storage-provisioner
	ccaa57ed742d0       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   e6d5e599ec17d       coredns-7db6d8ff4d-x984g
	a25c025ba395f       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   bbe0ba4f26fa0       busybox-fc5497c4f-4hdtl
	8adfed7dcc38a       ac1c61439df46                                                                                         4 minutes ago       Running             kindnet-cni               1                   65e9fb4a8551e       kindnet-j4h99
	26a1110268f56       747097150317f                                                                                         4 minutes ago       Running             kube-proxy                1                   31db7788c52d7       kube-proxy-v7s4q
	310a2ba1f3005       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   54b822818f491       storage-provisioner
	67aae91d2285d       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      1                   128719801fb28       etcd-multinode-353000
	5d4dc7f0171a8       a52dc94f0a912                                                                                         4 minutes ago       Running             kube-scheduler            1                   3bef980dc628a       kube-scheduler-multinode-353000
	18988fa5e4f48       91be940803172                                                                                         4 minutes ago       Running             kube-apiserver            1                   faa88b411f410       kube-apiserver-multinode-353000
	f7b4550455000       25a1387cdab82                                                                                         4 minutes ago       Running             kube-controller-manager   1                   1255cdadd4b54       kube-controller-manager-multinode-353000
	8c6ad13b3a78e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago      Exited              busybox                   0                   55c2b427ef24f       busybox-fc5497c4f-4hdtl
	deba067632e3e       cbb01a7bd410d                                                                                         11 minutes ago      Exited              coredns                   0                   5cbb1f2848836       coredns-7db6d8ff4d-x984g
	f854aa2e2bd31       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              11 minutes ago      Exited              kindnet-cni               0                   5e434eeac16fa       kindnet-j4h99
	1b251ec109bf4       747097150317f                                                                                         12 minutes ago      Exited              kube-proxy                0                   75aef0f938fa2       kube-proxy-v7s4q
	496239ba94592       3861cfcd7c04c                                                                                         12 minutes ago      Exited              etcd                      0                   4479d5328ed80       etcd-multinode-353000
	4f9c6abaf085e       a52dc94f0a912                                                                                         12 minutes ago      Exited              kube-scheduler            0                   2627ea28857a0       kube-scheduler-multinode-353000
	e847ea1ccea34       91be940803172                                                                                         12 minutes ago      Exited              kube-apiserver            0                   4a744abd670d4       kube-apiserver-multinode-353000
	254a0e0afe628       25a1387cdab82                                                                                         12 minutes ago      Exited              kube-controller-manager   0                   0e7e3b74d4e98       kube-controller-manager-multinode-353000
	
	
	==> coredns [ccaa57ed742d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54720 - 29707 "HINFO IN 3370124570245195731.7845949665974998901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010017697s
	
	
	==> coredns [deba067632e3] <==
	[INFO] 10.244.1.2:54969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067018s
	[INFO] 10.244.1.2:38029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071562s
	[INFO] 10.244.1.2:34326 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056229s
	[INFO] 10.244.1.2:53072 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077454s
	[INFO] 10.244.1.2:42751 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106879s
	[INFO] 10.244.1.2:35314 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070499s
	[INFO] 10.244.1.2:47905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037641s
	[INFO] 10.244.0.3:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080705s
	[INFO] 10.244.0.3:57627 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107863s
	[INFO] 10.244.0.3:35475 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031072s
	[INFO] 10.244.0.3:43687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098542s
	[INFO] 10.244.1.2:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087221s
	[INFO] 10.244.1.2:53832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000099684s
	[INFO] 10.244.1.2:48880 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068665s
	[INFO] 10.244.1.2:45968 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057536s
	[INFO] 10.244.0.3:58843 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096021s
	[INFO] 10.244.0.3:32849 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001271s
	[INFO] 10.244.0.3:48661 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121766s
	[INFO] 10.244.0.3:42982 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000079089s
	[INFO] 10.244.1.2:53588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095171s
	[INFO] 10.244.1.2:51363 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00006577s
	[INFO] 10.244.1.2:50446 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069941s
	[INFO] 10.244.1.2:58279 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000137813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
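Each query line above follows the CoreDNS log plugin's layout: client address and query id, a quoted tuple of type, class, name, protocol, request size, DO bit, and EDNS buffer size, then rcode, response flags, response size, and duration. A rough parser for that layout (field order assumed from the plugin's documented format, not taken from CoreDNS source) could be:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // queryLine matches CoreDNS log-plugin output such as:
    //   [INFO] 10.244.1.2:54969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067018s
    var queryLine = regexp.MustCompile(
    	`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

    func main() {
    	line := `[INFO] 10.244.1.2:54969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000067018s`
    	m := queryLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("client=%s type=%s name=%s rcode=%s took=%s\n", m[1], m[3], m[5], m[10], m[13])
    }

Against the NXDOMAIN line above it prints client=10.244.1.2:54969 type=AAAA name=kubernetes.default. rcode=NXDOMAIN took=0.000067018s.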
	
	
	==> describe nodes <==
	Name:               multinode-353000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T19_40_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:40:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:52:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Jun 2024 02:48:05 +0000   Tue, 11 Jun 2024 02:48:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.19
	  Hostname:    multinode-353000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9b8a9458f2642adaf019d9b4b838fc8
	  System UUID:                f0e94315-0000-0000-ac08-1f17bf5837e0
	  Boot ID:                    6aadb9aa-f53f-46f8-8739-49ca8a404678
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hdtl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-x984g                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-multinode-353000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-j4h99                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-multinode-353000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-multinode-353000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-v7s4q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-multinode-353000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node multinode-353000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node multinode-353000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node multinode-353000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                    node-controller  Node multinode-353000 event: Registered Node multinode-353000 in Controller
	  Normal  NodeReady                11m                    kubelet          Node multinode-353000 status is now: NodeReady
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node multinode-353000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node multinode-353000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node multinode-353000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node multinode-353000 event: Registered Node multinode-353000 in Controller
	
	
	Name:               multinode-353000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-353000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-353000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T19_41_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Jun 2024 02:41:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-353000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Jun 2024 02:45:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 11 Jun 2024 02:42:06 +0000   Tue, 11 Jun 2024 02:48:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.20
	  Hostname:    multinode-353000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 32bb2f108a254471a31dc67f28f9d3d4
	  System UUID:                3b1545e7-0000-0000-88e9-620fa037ae16
	  Boot ID:                    38bf82fb-0b80-495c-b710-667d6f0da6a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.4
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fznn5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-mcx2t              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-nz5rp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node multinode-353000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node multinode-353000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node multinode-353000-m02 event: Registered Node multinode-353000-m02 in Controller
	  Normal  NodeReady                10m                kubelet          Node multinode-353000-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m21s              node-controller  Node multinode-353000-m02 event: Registered Node multinode-353000-m02 in Controller
	  Normal  NodeNotReady             3m41s              node-controller  Node multinode-353000-m02 status is now: NodeNotReady
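The m02 timeline above is the node lifecycle controller doing its job: the kubelet's last lease renewal was 02:45:20, the restarted control plane came back at roughly 02:47:58, and once the controller saw the heartbeat stale beyond its grace window it flipped every condition to Unknown at 02:48:52 and applied the unreachable taints shown earlier. Reduced to its core, that staleness test is a timestamp comparison; a toy version follows (the 40s default for the grace period is an assumption about this cluster's flags):

    package main

    import (
    	"fmt"
    	"time"
    )

    // markUnknownIfStale mirrors the node lifecycle controller's core test:
    // if the node has not heartbeated within the grace period, its Ready
    // condition can no longer be trusted and is reported as Unknown.
    func markUnknownIfStale(lastHeartbeat, now time.Time, grace time.Duration) string {
    	if now.Sub(lastHeartbeat) > grace {
    		return "Unknown" // the controller then taints the node unreachable
    	}
    	return "True"
    }

    func main() {
    	last, _ := time.Parse(time.RFC3339, "2024-06-11T02:45:20Z")  // m02's lease RenewTime
    	check, _ := time.Parse(time.RFC3339, "2024-06-11T02:48:52Z") // controller's check
    	// 40s is the usual --node-monitor-grace-period default; an assumption here.
    	fmt.Println(markUnknownIfStale(last, check, 40*time.Second)) // prints "Unknown"
    }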
	
	
	==> dmesg <==
	[  +5.341226] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007061] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.633037] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.245165] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.913210] systemd-fstab-generator[463]: Ignoring "noauto" option for root device
	[  +0.098315] systemd-fstab-generator[475]: Ignoring "noauto" option for root device
	[  +1.803072] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +0.064012] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.202234] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +0.110131] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +0.124925] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[Jun11 02:47] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.052268] kauditd_printk_skb: 117 callbacks suppressed
	[  +0.053153] systemd-fstab-generator[966]: Ignoring "noauto" option for root device
	[  +0.098575] systemd-fstab-generator[978]: Ignoring "noauto" option for root device
	[  +0.132187] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.403867] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +1.307475] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[Jun11 02:48] kauditd_printk_skb: 172 callbacks suppressed
	[  +2.395735] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +5.040891] kauditd_printk_skb: 70 callbacks suppressed
	[ +22.865342] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [496239ba9459] <==
	{"level":"info","ts":"2024-06-11T02:40:13.416849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became candidate at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.41688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgVoteResp from 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became leader at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.416895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 166c32860e8fd508 elected leader 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:40:13.420105Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"166c32860e8fd508","local-member-attributes":"{Name:multinode-353000 ClientURLs:[https://192.169.0.19:2379]}","request-path":"/0/members/166c32860e8fd508/attributes","cluster-id":"f10222c540877db9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-11T02:40:13.420141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.420334Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.420479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:40:13.422269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.19:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.42366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-11T02:40:13.426545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.426575Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-11T02:40:13.443729Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443804Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:40:13.443841Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:45:32.030377Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-11T02:45:32.030416Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-353000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"]}
	{"level":"warn","ts":"2024-06-11T02:45:32.030463Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.030528Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.057343Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.19:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-11T02:45:32.057367Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.19:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-11T02:45:32.057436Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"166c32860e8fd508","current-leader-member-id":"166c32860e8fd508"}
	{"level":"info","ts":"2024-06-11T02:45:32.058299Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:45:32.058389Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:45:32.058397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-353000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"]}
	
	
	==> etcd [67aae91d2285] <==
	{"level":"info","ts":"2024-06-11T02:47:58.075051Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f10222c540877db9","local-member-id":"166c32860e8fd508","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:47:58.075114Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-11T02:47:58.080222Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"166c32860e8fd508","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-11T02:47:58.080507Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.081545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.081606Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-11T02:47:58.082237Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-11T02:47:58.082665Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"166c32860e8fd508","initial-advertise-peer-urls":["https://192.169.0.19:2380"],"listen-peer-urls":["https://192.169.0.19:2380"],"advertise-client-urls":["https://192.169.0.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-11T02:47:58.083061Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-11T02:47:58.083578Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:47:58.083777Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.19:2380"}
	{"level":"info","ts":"2024-06-11T02:47:58.539957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.540002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.540209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgPreVoteResp from 166c32860e8fd508 at term 2"}
	{"level":"info","ts":"2024-06-11T02:47:58.54026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became candidate at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 received MsgVoteResp from 166c32860e8fd508 at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"166c32860e8fd508 became leader at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.540429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 166c32860e8fd508 elected leader 166c32860e8fd508 at term 3"}
	{"level":"info","ts":"2024-06-11T02:47:58.545874Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"166c32860e8fd508","local-member-attributes":"{Name:multinode-353000 ClientURLs:[https://192.169.0.19:2379]}","request-path":"/0/members/166c32860e8fd508/attributes","cluster-id":"f10222c540877db9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-11T02:47:58.546009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:47:58.545972Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-11T02:47:58.547719Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-11T02:47:58.550104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-11T02:47:58.551389Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-11T02:47:58.553594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.19:2379"}
	
	
	==> kernel <==
	 02:52:33 up 6 min,  0 users,  load average: 0.07, 0.08, 0.02
	Linux multinode-353000 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8adfed7dcc38] <==
	I0611 02:51:53.095906       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:51:53.096143       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:51:53.096229       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:52:03.099803       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:52:03.099837       1 main.go:227] handling current node
	I0611 02:52:03.099854       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:52:03.099860       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:52:03.099986       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:52:03.100020       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:52:13.108639       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:52:13.108782       1 main.go:227] handling current node
	I0611 02:52:13.108888       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:52:13.109010       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:52:13.109140       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:52:13.109238       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:52:23.113573       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:52:23.113607       1 main.go:227] handling current node
	I0611 02:52:23.113616       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:52:23.113620       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:52:23.113777       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:52:23.113889       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:52:33.117451       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:52:33.117485       1 main.go:227] handling current node
	I0611 02:52:33.117493       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:52:33.117497       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
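What kindnet is reconciling in this loop is one route per peer node: pod traffic for a node's CIDR is sent via that node's InternalIP (here, 10.244.1.0/24 via 192.169.0.20 and 10.244.2.0/24 via 192.169.0.21). A minimal sketch of that step, using github.com/vishvananda/netlink, the kind of API such CNI helpers rely on; treat it as an illustration of the technique rather than kindnet's actual code:

    package main

    import (
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // ensureRoute installs "podCIDR via nodeIP", the per-node route kept in
    // sync for every peer logged above (e.g. 10.244.1.0/24 via 192.169.0.20
    // for multinode-353000-m02).
    func ensureRoute(podCIDR, nodeIP string) error {
    	_, dst, err := net.ParseCIDR(podCIDR)
    	if err != nil {
    		return err
    	}
    	route := &netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)}
    	// RouteReplace is idempotent: it adds the route or updates it in place.
    	return netlink.RouteReplace(route)
    }

    func main() {
    	if err := ensureRoute("10.244.1.0/24", "192.169.0.20"); err != nil {
    		panic(err)
    	}
    }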
	
	
	==> kindnet [f854aa2e2bd3] <==
	I0611 02:44:46.374755       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:44:56.379765       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:44:56.379800       1 main.go:227] handling current node
	I0611 02:44:56.379809       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:44:56.379813       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:44:56.380004       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:44:56.380081       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:06.387267       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:06.387415       1 main.go:227] handling current node
	I0611 02:45:06.387438       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:06.387530       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:06.387707       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:06.387767       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:16.398174       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:16.398207       1 main.go:227] handling current node
	I0611 02:45:16.398215       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:16.398219       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:16.398282       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:16.398306       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	I0611 02:45:26.402961       1 main.go:223] Handling node with IPs: map[192.169.0.19:{}]
	I0611 02:45:26.403014       1 main.go:227] handling current node
	I0611 02:45:26.403023       1 main.go:223] Handling node with IPs: map[192.169.0.20:{}]
	I0611 02:45:26.403028       1 main.go:250] Node multinode-353000-m02 has CIDR [10.244.1.0/24] 
	I0611 02:45:26.403145       1 main.go:223] Handling node with IPs: map[192.169.0.21:{}]
	I0611 02:45:26.403174       1 main.go:250] Node multinode-353000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [18988fa5e4f4] <==
	I0611 02:47:59.908944       1 shared_informer.go:320] Caches are synced for configmaps
	I0611 02:47:59.909256       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0611 02:47:59.909519       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0611 02:47:59.909555       1 aggregator.go:165] initial CRD sync complete...
	I0611 02:47:59.909561       1 autoregister_controller.go:141] Starting autoregister controller
	I0611 02:47:59.909564       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0611 02:47:59.909568       1 cache.go:39] Caches are synced for autoregister controller
	I0611 02:47:59.912589       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0611 02:47:59.915817       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0611 02:47:59.916043       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0611 02:47:59.916367       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0611 02:47:59.916508       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0611 02:47:59.963218       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0611 02:47:59.963277       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0611 02:47:59.963852       1 policy_source.go:224] refreshing policies
	I0611 02:47:59.980820       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0611 02:48:00.814645       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0611 02:48:01.025076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.19]
	I0611 02:48:01.026199       1 controller.go:615] quota admission added evaluator for: endpoints
	I0611 02:48:01.031513       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0611 02:48:01.761603       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0611 02:48:01.928471       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0611 02:48:01.947406       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0611 02:48:01.991226       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0611 02:48:01.997090       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e847ea1ccea3] <==
	W0611 02:45:33.054541       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054753       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054897       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054965       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054485       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053684       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053702       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055039       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053718       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054788       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055162       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055246       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055342       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055398       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055476       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055630       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055255       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055686       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.054764       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.053658       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055278       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055325       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055866       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.055938       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0611 02:45:33.056162       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [254a0e0afe62] <==
	I0611 02:40:32.758858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.352606ms"
	I0611 02:40:32.759042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.362µs"
	I0611 02:40:40.910014       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.455µs"
	I0611 02:40:40.919760       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="36.148µs"
	I0611 02:40:41.128812       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0611 02:40:42.122795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.582µs"
	I0611 02:40:42.147670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.018989ms"
	I0611 02:40:42.147737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.798µs"
	I0611 02:41:05.726747       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m02\" does not exist"
	I0611 02:41:05.736926       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m02" podCIDRs=["10.244.1.0/24"]
	I0611 02:41:06.133872       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m02"
	I0611 02:41:48.707406       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:41:50.827299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.246398ms"
	I0611 02:41:50.836431       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.08559ms"
	I0611 02:41:50.836953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.263µs"
	I0611 02:41:53.908886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.755154ms"
	I0611 02:41:53.909672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.964µs"
	I0611 02:41:54.537772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.288076ms"
	I0611 02:41:54.537833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.558µs"
	I0611 02:42:19.344515       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-353000-m03\" does not exist"
	I0611 02:42:19.344568       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:42:19.349890       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-353000-m03" podCIDRs=["10.244.2.0/24"]
	I0611 02:42:21.151832       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-353000-m03"
	I0611 02:43:01.974195       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	I0611 02:43:51.177548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-353000-m02"
	
	
	==> kube-controller-manager [f7b455045500] <==
	I0611 02:48:12.863445       1 shared_informer.go:320] Caches are synced for persistent volume
	I0611 02:48:12.863718       1 shared_informer.go:320] Caches are synced for attach detach
	I0611 02:48:12.863935       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0611 02:48:12.863727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0611 02:48:12.863732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0611 02:48:12.863741       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0611 02:48:12.868674       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0611 02:48:12.870923       1 shared_informer.go:320] Caches are synced for daemon sets
	I0611 02:48:12.872724       1 shared_informer.go:320] Caches are synced for cronjob
	I0611 02:48:12.890364       1 shared_informer.go:320] Caches are synced for job
	I0611 02:48:12.918816       1 shared_informer.go:320] Caches are synced for disruption
	I0611 02:48:12.922504       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0611 02:48:12.992005       1 shared_informer.go:320] Caches are synced for deployment
	I0611 02:48:13.002177       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0611 02:48:13.002383       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.616µs"
	I0611 02:48:13.002398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.753µs"
	I0611 02:48:13.009936       1 shared_informer.go:320] Caches are synced for resource quota
	I0611 02:48:13.014332       1 shared_informer.go:320] Caches are synced for crt configmap
	I0611 02:48:13.059369       1 shared_informer.go:320] Caches are synced for resource quota
	I0611 02:48:13.074262       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0611 02:48:13.484894       1 shared_informer.go:320] Caches are synced for garbage collector
	I0611 02:48:13.489351       1 shared_informer.go:320] Caches are synced for garbage collector
	I0611 02:48:13.489486       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0611 02:48:52.871429       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.954301ms"
	I0611 02:48:52.871670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.1µs"
	
	
	==> kube-proxy [1b251ec109bf] <==
	I0611 02:40:32.780056       1 server_linux.go:69] "Using iptables proxy"
	I0611 02:40:32.794486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.19"]
	I0611 02:40:32.857420       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0611 02:40:32.857441       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0611 02:40:32.857452       1 server_linux.go:165] "Using iptables Proxier"
	I0611 02:40:32.859777       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0611 02:40:32.859889       1 server.go:872] "Version info" version="v1.30.1"
	I0611 02:40:32.859898       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:40:32.861522       1 config.go:192] "Starting service config controller"
	I0611 02:40:32.861557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0611 02:40:32.861607       1 config.go:101] "Starting endpoint slice config controller"
	I0611 02:40:32.861612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0611 02:40:32.862416       1 config.go:319] "Starting node config controller"
	I0611 02:40:32.862445       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0611 02:40:32.962479       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0611 02:40:32.962565       1 shared_informer.go:320] Caches are synced for service config
	I0611 02:40:32.969480       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [26a1110268f5] <==
	I0611 02:48:02.001653       1 server_linux.go:69] "Using iptables proxy"
	I0611 02:48:02.013979       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.19"]
	I0611 02:48:02.057499       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0611 02:48:02.057540       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0611 02:48:02.057555       1 server_linux.go:165] "Using iptables Proxier"
	I0611 02:48:02.059982       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0611 02:48:02.060269       1 server.go:872] "Version info" version="v1.30.1"
	I0611 02:48:02.060300       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:48:02.061760       1 config.go:192] "Starting service config controller"
	I0611 02:48:02.061875       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0611 02:48:02.061927       1 config.go:101] "Starting endpoint slice config controller"
	I0611 02:48:02.061950       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0611 02:48:02.062636       1 config.go:319] "Starting node config controller"
	I0611 02:48:02.062663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0611 02:48:02.162369       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0611 02:48:02.162444       1 shared_informer.go:320] Caches are synced for service config
	I0611 02:48:02.162680       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4f9c6abaf085] <==
	E0611 02:40:14.372584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:14.372745       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:14.372819       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.182489       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0611 02:40:15.182664       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:40:15.203927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0611 02:40:15.203983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0611 02:40:15.281257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0611 02:40:15.281362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0611 02:40:15.290251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0611 02:40:15.290425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0611 02:40:15.336462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.336589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.431159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0611 02:40:15.431203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0611 02:40:15.442927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.442968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.494146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.494219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0611 02:40:15.551457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0611 02:40:15.551500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0611 02:40:17.163038       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0611 02:45:32.082918       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0611 02:45:32.083248       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0611 02:45:32.083296       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5d4dc7f0171a] <==
	I0611 02:47:58.678119       1 serving.go:380] Generated self-signed cert in-memory
	W0611 02:47:59.868071       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0611 02:47:59.868111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0611 02:47:59.868235       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0611 02:47:59.868322       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0611 02:47:59.892253       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0611 02:47:59.892287       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0611 02:47:59.893518       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0611 02:47:59.893582       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0611 02:47:59.893744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0611 02:47:59.893978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0611 02:47:59.994411       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 11 02:48:05 multinode-353000 kubelet[1237]: I0611 02:48:05.272706    1237 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: I0611 02:48:32.544417    1237 scope.go:117] "RemoveContainer" containerID="130521568c691ad88511924448b027ea5017bb130505a8d01871828a60561d29"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: I0611 02:48:32.544879    1237 scope.go:117] "RemoveContainer" containerID="310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2"
	Jun 11 02:48:32 multinode-353000 kubelet[1237]: E0611 02:48:32.545019    1237 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(95aa7c05-392e-49d4-8604-12400011c22b)\"" pod="kube-system/storage-provisioner" podUID="95aa7c05-392e-49d4-8604-12400011c22b"
	Jun 11 02:48:47 multinode-353000 kubelet[1237]: I0611 02:48:47.085051    1237 scope.go:117] "RemoveContainer" containerID="310a2ba1f30059e258b7e668eb46dbabadbc5888b4032edfaf6d0cf89889aab2"
	Jun 11 02:48:57 multinode-353000 kubelet[1237]: E0611 02:48:57.099062    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:48:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:49:57 multinode-353000 kubelet[1237]: E0611 02:49:57.094570    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:49:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:50:57 multinode-353000 kubelet[1237]: E0611 02:50:57.094884    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:50:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:50:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:50:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:50:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 11 02:51:57 multinode-353000 kubelet[1237]: E0611 02:51:57.094582    1237 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 11 02:51:57 multinode-353000 kubelet[1237]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 11 02:51:57 multinode-353000 kubelet[1237]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 11 02:51:57 multinode-353000 kubelet[1237]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 11 02:51:57 multinode-353000 kubelet[1237]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-353000 -n multinode-353000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-353000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (154.23s)
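
Note: the repeated kubelet "Could not set up iptables canary" errors in the logs above indicate the guest kernel has no ip6tables "nat" table (the ip6table_nat module is not loaded). That appears harmless for this cluster: the kube-proxy logs above show it running in single-stack IPv4 mode ("No iptables support for family" ipFamily="IPv6"). A minimal manual check, assuming shell access to the guest; these commands are illustrative and not part of the test:

	# open a shell in the multinode-353000 guest VM
	out/minikube-darwin-amd64 ssh -p multinode-353000
	# inside the guest: listing the nat table fails if it is absent
	sudo ip6tables -t nat -L -n
	# attempt to load the module, if it was built for this kernel
	sudo modprobe ip6table_nat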

                                                
                                    
TestScheduledStopUnix (307.35s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-545000 --memory=2048 --driver=hyperkit 
E0610 20:00:36.320274    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-545000 --memory=2048 --driver=hyperkit : signal: killed (5m0.005254695s)

                                                
                                                
-- stdout --
	* [scheduled-stop-545000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-545000" primary control-plane node in "scheduled-stop-545000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-545000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-545000" primary control-plane node in "scheduled-stop-545000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-06-10 20:05:12.857945 -0700 PDT m=+4387.020271077
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-545000 -n scheduled-stop-545000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-545000 -n scheduled-stop-545000: exit status 3 (2.024667277s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:05:14.880063   10573 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	E0610 20:05:14.880082   10573 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "scheduled-stop-545000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-545000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-545000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-545000: (5.317099158s)
--- FAIL: TestScheduledStopUnix (307.35s)
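
Note: the status probe failed because SSH public-key authentication to the VM was rejected ("attempted methods [none publickey], no supported methods remain"). The initial start was killed after 5m0s while still at "Creating hyperkit VM", so provisioning likely never installed the generated key in the guest. A manual probe one could run while such a VM is still reachable, shown purely as an illustration (the <vm-ip> placeholder is hypothetical):

	# minikube keeps a per-machine private key under .minikube/machines/<profile>/
	# and logs in as the "docker" user on its VMs
	ssh -i /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/scheduled-stop-545000/id_rsa docker@<vm-ip>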

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (78.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-486000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:36:55.808718    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:37:34.474353    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:37:55.288766    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.295209    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.305936    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.326271    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.367481    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.449064    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.610356    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:55.932041    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:56.572815    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:37:57.854458    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:38:00.415738    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:38:05.536108    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
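Note: the cert_rotation errors above appear to come from the long-running test process (pid 6485), whose client-go certificate-rotation watchers still reference client certs of profiles deleted earlier in the run (kubenet-335000, custom-flannel-335000, no-preload-879000); they are noise rather than the cause of the failure below. One illustrative way to spot and drop contexts left behind by deleted profiles:

	# list contexts still present in the shared kubeconfig
	kubectl config get-contexts
	# remove a stale context for a deleted profile
	kubectl config delete-context no-preload-879000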
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-486000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.1: exit status 90 (1m17.909823533s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-486000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "default-k8s-diff-port-486000" primary control-plane node in "default-k8s-diff-port-486000" cluster
	* Restarting existing hyperkit VM for "default-k8s-diff-port-486000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 20:36:50.808389   14920 out.go:291] Setting OutFile to fd 1 ...
	I0610 20:36:50.808562   14920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 20:36:50.808568   14920 out.go:304] Setting ErrFile to fd 2...
	I0610 20:36:50.808571   14920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 20:36:50.808745   14920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 20:36:50.810882   14920 out.go:298] Setting JSON to false
	I0610 20:36:50.833145   14920 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":29166,"bootTime":1718047844,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 20:36:50.833235   14920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 20:36:50.855487   14920 out.go:177] * [default-k8s-diff-port-486000] minikube v1.33.1 on Darwin 14.4.1
	I0610 20:36:50.919969   14920 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 20:36:50.898244   14920 notify.go:220] Checking for updates...
	I0610 20:36:50.940926   14920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 20:36:50.962907   14920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 20:36:50.984280   14920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 20:36:51.005131   14920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 20:36:51.025892   14920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 20:36:51.047313   14920 config.go:182] Loaded profile config "default-k8s-diff-port-486000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 20:36:51.047666   14920 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:36:51.047707   14920 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:36:51.056663   14920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58951
	I0610 20:36:51.057031   14920 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:36:51.057479   14920 main.go:141] libmachine: Using API Version  1
	I0610 20:36:51.057503   14920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:36:51.057734   14920 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:36:51.057911   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:36:51.058116   14920 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 20:36:51.058363   14920 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:36:51.058386   14920 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:36:51.067042   14920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58953
	I0610 20:36:51.067356   14920 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:36:51.067689   14920 main.go:141] libmachine: Using API Version  1
	I0610 20:36:51.067699   14920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:36:51.067923   14920 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:36:51.068021   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:36:51.097019   14920 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 20:36:51.139044   14920 start.go:297] selected driver: hyperkit
	I0610 20:36:51.139073   14920 start.go:901] validating driver "hyperkit" against &{Name:default-k8s-diff-port-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.50 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 20:36:51.139251   14920 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 20:36:51.143561   14920 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 20:36:51.143670   14920 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 20:36:51.152163   14920 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 20:36:51.155987   14920 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:36:51.156017   14920 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 20:36:51.156155   14920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 20:36:51.156222   14920 cni.go:84] Creating CNI manager for ""
	I0610 20:36:51.156238   14920 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 20:36:51.156272   14920 start.go:340] cluster config:
	{Name:default-k8s-diff-port-486000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-486000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.50 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 20:36:51.156373   14920 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 20:36:51.198825   14920 out.go:177] * Starting "default-k8s-diff-port-486000" primary control-plane node in "default-k8s-diff-port-486000" cluster
	I0610 20:36:51.220109   14920 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 20:36:51.220179   14920 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 20:36:51.220209   14920 cache.go:56] Caching tarball of preloaded images
	I0610 20:36:51.220406   14920 preload.go:173] Found /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0610 20:36:51.220426   14920 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 20:36:51.220573   14920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/default-k8s-diff-port-486000/config.json ...
	I0610 20:36:51.221512   14920 start.go:360] acquireMachinesLock for default-k8s-diff-port-486000: {Name:mkb49c28b47b51a1f649f8a2347c58a1e3abb012 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 20:36:51.221651   14920 start.go:364] duration metric: took 113.624µs to acquireMachinesLock for "default-k8s-diff-port-486000"
	I0610 20:36:51.221687   14920 start.go:96] Skipping create...Using existing machine configuration
	I0610 20:36:51.221708   14920 fix.go:54] fixHost starting: 
	I0610 20:36:51.222107   14920 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:36:51.222135   14920 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:36:51.231296   14920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58955
	I0610 20:36:51.231670   14920 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:36:51.231988   14920 main.go:141] libmachine: Using API Version  1
	I0610 20:36:51.231998   14920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:36:51.232254   14920 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:36:51.232372   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:36:51.232477   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetState
	I0610 20:36:51.232570   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 20:36:51.232641   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | hyperkit pid from json: 14849
	I0610 20:36:51.233734   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | hyperkit pid 14849 missing from process table
	I0610 20:36:51.233777   14920 fix.go:112] recreateIfNeeded on default-k8s-diff-port-486000: state=Stopped err=<nil>
	I0610 20:36:51.233805   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	W0610 20:36:51.233895   14920 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 20:36:51.275918   14920 out.go:177] * Restarting existing hyperkit VM for "default-k8s-diff-port-486000" ...
	I0610 20:36:51.299085   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .Start
	I0610 20:36:51.299293   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 20:36:51.299308   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/hyperkit.pid
	I0610 20:36:51.300629   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | hyperkit pid 14849 missing from process table
	I0610 20:36:51.300658   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | pid 14849 is in state "Stopped"
	I0610 20:36:51.300685   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/hyperkit.pid...
	I0610 20:36:51.300787   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Using UUID 1a90d347-30e1-4487-a3cd-ba4049d2190a
	I0610 20:36:51.318038   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Generated MAC 22:8b:b7:82:8b:fa
	I0610 20:36:51.318060   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-486000
	I0610 20:36:51.318190   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1a90d347-30e1-4487-a3cd-ba4049d2190a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000412480)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 20:36:51.318219   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1a90d347-30e1-4487-a3cd-ba4049d2190a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000412480)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0610 20:36:51.318279   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1a90d347-30e1-4487-a3cd-ba4049d2190a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/default-k8s-diff-port-486000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-486000"}
	I0610 20:36:51.318330   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1a90d347-30e1-4487-a3cd-ba4049d2190a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/default-k8s-diff-port-486000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/tty,log=/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/console-ring -f kexec,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/bzimage,/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-486000"
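
For reference, the hyperkit invocation logged above breaks down into a handful of flag groups. A minimal annotated reconstruction follows; the paths and kernel cmdline are shortened placeholders, and the flag glosses are my reading of the bhyve/xhyve-style options, not something this log states:

	# Hypothetical paths under /tmp/demo; substitute a real state directory.
	hyperkit_args=(
	  -A                                          # generate ACPI tables for the guest
	  -u                                          # guest RTC runs in UTC
	  -F /tmp/demo/hyperkit.pid                   # pid file (the same file cleaned up above)
	  -c 2 -m 2200M                               # 2 vCPUs, 2200 MiB of RAM
	  -s 0:0,hostbridge -s 31,lpc                 # PCI host bridge and LPC bus
	  -s 1:0,virtio-net                           # NIC; its MAC is derived from the -U UUID
	  -U 1a90d347-30e1-4487-a3cd-ba4049d2190a     # stable VM UUID
	  -s 2:0,virtio-blk,/tmp/demo/disk.rawdisk    # root disk
	  -s 3,ahci-cd,/tmp/demo/boot2docker.iso      # boot ISO
	  -s 4,virtio-rnd                             # entropy device
	  -l com1,autopty=/tmp/demo/tty,log=/tmp/demo/console-ring  # serial console
	  -f "kexec,/tmp/demo/bzimage,/tmp/demo/initrd,loglevel=3 console=ttyS0"  # direct kernel boot
	)
	sudo hyperkit "${hyperkit_args[@]}"
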
	I0610 20:36:51.318344   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0610 20:36:51.319807   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 DEBUG: hyperkit: Pid is 14931
	I0610 20:36:51.320265   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Attempt 0
	I0610 20:36:51.320299   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 20:36:51.320406   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | hyperkit pid from json: 14931
	I0610 20:36:51.322166   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Searching for 22:8b:b7:82:8b:fa in /var/db/dhcpd_leases ...
	I0610 20:36:51.322249   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Found 49 entries in /var/db/dhcpd_leases!
	I0610 20:36:51.322261   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:22:8b:b7:82:8b:fa ID:1,22:8b:b7:82:8b:fa Lease:0x66691792}
	I0610 20:36:51.322275   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | Found match: 22:8b:b7:82:8b:fa
	I0610 20:36:51.322282   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | IP: 192.169.0.50
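
The IP discovery above is a scan of macOS's bootpd lease database: the driver derives the guest MAC from the VM UUID and looks for a matching lease. A rough shell equivalent is sketched below; it assumes the stock /var/db/dhcpd_leases layout of brace-delimited records with name=/ip_address=/hw_address= fields, which is my recollection of that file rather than anything shown in this log:

	mac="22:8b:b7:82:8b:fa"
	sudo awk -v mac="$mac" '
	  /^\{/ { rec = "" }                         # a new lease record starts
	        { rec = rec $0 "\n" }                # accumulate the current record
	  /^\}/ { if (rec ~ mac) printf "%s", rec }  # print the record holding our MAC
	' /var/db/dhcpd_leases
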
	I0610 20:36:51.322374   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetConfigRaw
	I0610 20:36:51.323095   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetIP
	I0610 20:36:51.323264   14920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/default-k8s-diff-port-486000/config.json ...
	I0610 20:36:51.323718   14920 machine.go:94] provisionDockerMachine start ...
	I0610 20:36:51.323730   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:36:51.323851   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:36:51.323988   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:36:51.324109   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:36:51.324214   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:36:51.324341   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:36:51.324499   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:36:51.324692   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:36:51.324700   14920 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 20:36:51.327786   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0610 20:36:51.335896   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0610 20:36:51.336858   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 20:36:51.336873   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 20:36:51.336880   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 20:36:51.336890   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 20:36:51.720692   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0610 20:36:51.720709   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0610 20:36:51.835489   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0610 20:36:51.835510   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0610 20:36:51.835520   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0610 20:36:51.835529   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0610 20:36:51.836325   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0610 20:36:51.836335   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0610 20:36:57.135638   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:57 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0610 20:36:57.135731   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:57 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0610 20:36:57.135741   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:57 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0610 20:36:57.159459   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | 2024/06/10 20:36:57 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0610 20:37:04.486539   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 20:37:04.486553   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetMachineName
	I0610 20:37:04.486682   14920 buildroot.go:166] provisioning hostname "default-k8s-diff-port-486000"
	I0610 20:37:04.486691   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetMachineName
	I0610 20:37:04.486793   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.486881   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.486964   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.487055   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.487131   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.487264   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:04.487407   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:04.487419   14920 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-486000 && echo "default-k8s-diff-port-486000" | sudo tee /etc/hostname
	I0610 20:37:04.548239   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-486000
	
	I0610 20:37:04.548261   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.548397   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.548531   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.548654   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.548755   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.548880   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:04.549028   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:04.549041   14920 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-486000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-486000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-486000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 20:37:04.605125   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: 
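
The /etc/hosts script above follows the Debian-style convention of mapping the machine's hostname to 127.0.1.1 (distinct from 127.0.0.1/localhost), so that sudo and other tools can resolve the local hostname without DNS. After it runs, the guest's hosts file carries a line like:

	127.0.1.1 default-k8s-diff-port-486000
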
	I0610 20:37:04.605145   14920 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-5942/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-5942/.minikube}
	I0610 20:37:04.605163   14920 buildroot.go:174] setting up certificates
	I0610 20:37:04.605171   14920 provision.go:84] configureAuth start
	I0610 20:37:04.605178   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetMachineName
	I0610 20:37:04.605311   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetIP
	I0610 20:37:04.605409   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.605507   14920 provision.go:143] copyHostCerts
	I0610 20:37:04.605612   14920 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem, removing ...
	I0610 20:37:04.605622   14920 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem
	I0610 20:37:04.605770   14920 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/ca.pem (1082 bytes)
	I0610 20:37:04.606020   14920 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem, removing ...
	I0610 20:37:04.606027   14920 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem
	I0610 20:37:04.606107   14920 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/cert.pem (1123 bytes)
	I0610 20:37:04.606309   14920 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem, removing ...
	I0610 20:37:04.606316   14920 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem
	I0610 20:37:04.606392   14920 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-5942/.minikube/key.pem (1679 bytes)
	I0610 20:37:04.606566   14920 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-486000 san=[127.0.0.1 192.169.0.50 default-k8s-diff-port-486000 localhost minikube]
	I0610 20:37:04.680729   14920 provision.go:177] copyRemoteCerts
	I0610 20:37:04.680830   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 20:37:04.680849   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.681023   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.681165   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.681352   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.681452   14920 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/id_rsa Username:docker}
	I0610 20:37:04.714169   14920 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0610 20:37:04.734609   14920 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 20:37:04.754550   14920 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 20:37:04.774291   14920 provision.go:87] duration metric: took 169.103551ms to configureAuth
	I0610 20:37:04.774306   14920 buildroot.go:189] setting minikube options for container-runtime
	I0610 20:37:04.774449   14920 config.go:182] Loaded profile config "default-k8s-diff-port-486000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 20:37:04.774491   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:04.774626   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.774723   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.774804   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.774902   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.774990   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.775093   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:04.775226   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:04.775234   14920 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 20:37:04.825951   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 20:37:04.825962   14920 buildroot.go:70] root file system type: tmpfs
	I0610 20:37:04.826037   14920 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 20:37:04.826052   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.826182   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.826287   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.826380   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.826475   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.826596   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:04.826743   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:04.826788   14920 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 20:37:04.887159   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 20:37:04.887187   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:04.887319   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:04.887413   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.887513   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:04.887630   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:04.887757   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:04.887910   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:04.887923   14920 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 20:37:06.506497   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 20:37:06.506512   14920 machine.go:97] duration metric: took 15.182842794s to provisionDockerMachine
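
The diff/mv one-liner a few lines up is the idempotent unit-install pattern minikube uses here: render the desired unit to docker.service.new, diff it against the live unit, and only when they differ swap the new file in and daemon-reload/enable/restart. In this run diff failed because no docker.service existed yet, so the install branch ran and enabled the unit for the first time. The skeleton of the pattern, with a hypothetical unit name and render step:

	unit=/lib/systemd/system/demo.service   # hypothetical unit; render_unit is a stand-in
	render_unit > "${unit}.new"             # write the desired unit content
	if ! sudo diff -u "$unit" "${unit}.new"; then
	  sudo mv "${unit}.new" "$unit"         # only swap and restart on a real change
	  sudo systemctl daemon-reload
	  sudo systemctl enable demo.service
	  sudo systemctl restart demo.service
	fi
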
	I0610 20:37:06.506524   14920 start.go:293] postStartSetup for "default-k8s-diff-port-486000" (driver="hyperkit")
	I0610 20:37:06.506532   14920 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 20:37:06.506543   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:06.506731   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 20:37:06.506745   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:06.506864   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:06.506957   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:06.507037   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:06.507119   14920 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/id_rsa Username:docker}
	I0610 20:37:06.541638   14920 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 20:37:06.545170   14920 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 20:37:06.545183   14920 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/addons for local assets ...
	I0610 20:37:06.545290   14920 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-5942/.minikube/files for local assets ...
	I0610 20:37:06.545486   14920 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem -> 64852.pem in /etc/ssl/certs
	I0610 20:37:06.545694   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 20:37:06.554038   14920 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/ssl/certs/64852.pem --> /etc/ssl/certs/64852.pem (1708 bytes)
	I0610 20:37:06.591067   14920 start.go:296] duration metric: took 84.534084ms for postStartSetup
	I0610 20:37:06.591094   14920 fix.go:56] duration metric: took 15.369455023s for fixHost
	I0610 20:37:06.591141   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:06.591399   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:06.591612   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:06.591777   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:06.591946   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:06.592202   14920 main.go:141] libmachine: Using SSH client type: native
	I0610 20:37:06.592415   14920 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa3bcf00] 0xa3bfc60 <nil>  [] 0s} 192.169.0.50 22 <nil> <nil>}
	I0610 20:37:06.592423   14920 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 20:37:06.647003   14920 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718077026.952168444
	
	I0610 20:37:06.647016   14920 fix.go:216] guest clock: 1718077026.952168444
	I0610 20:37:06.647021   14920 fix.go:229] Guest: 2024-06-10 20:37:06.952168444 -0700 PDT Remote: 2024-06-10 20:37:06.591111 -0700 PDT m=+15.818668750 (delta=361.057444ms)
	I0610 20:37:06.647045   14920 fix.go:200] guest clock delta is within tolerance: 361.057444ms
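
The date +%s.%N exchange above is the guest-clock check: minikube compares the guest's wall clock with the host's and only forces a resync when the delta exceeds a tolerance (here the ~361ms delta passes). A bare-bones version of the same comparison, with an illustrative 2-second tolerance:

	guest=$(ssh docker@192.169.0.50 'date +%s.%N')   # guest runs GNU date
	host=$(date +%s)                                 # BSD date on the macOS host: whole seconds
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = h - g; if (d < 0) d = -d                   # absolute delta in seconds
	  print "delta:", d, (d < 2 ? "(within tolerance)" : "(resync needed)")
	}'
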
	I0610 20:37:06.647049   14920 start.go:83] releasing machines lock for "default-k8s-diff-port-486000", held for 15.425445501s
	I0610 20:37:06.647070   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:06.647195   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetIP
	I0610 20:37:06.647290   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:06.647588   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:06.647706   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:37:06.647783   14920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 20:37:06.647816   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:06.647843   14920 ssh_runner.go:195] Run: cat /version.json
	I0610 20:37:06.647854   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:37:06.647912   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:06.647942   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:37:06.648011   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:06.648022   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:37:06.648099   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:06.648118   14920 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:37:06.648199   14920 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/id_rsa Username:docker}
	I0610 20:37:06.648207   14920 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/id_rsa Username:docker}
	I0610 20:37:06.726645   14920 ssh_runner.go:195] Run: systemctl --version
	I0610 20:37:06.732144   14920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 20:37:06.736411   14920 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 20:37:06.736456   14920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 20:37:06.748824   14920 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 20:37:06.748840   14920 start.go:494] detecting cgroup driver to use...
	I0610 20:37:06.748941   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 20:37:06.769090   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 20:37:06.777819   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 20:37:06.786140   14920 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 20:37:06.786189   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 20:37:06.794616   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 20:37:06.802898   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 20:37:06.811329   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 20:37:06.819617   14920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 20:37:06.828205   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 20:37:06.836966   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 20:37:06.845365   14920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 20:37:06.853917   14920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 20:37:06.861568   14920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 20:37:06.869125   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 20:37:06.964845   14920 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 20:37:06.984044   14920 start.go:494] detecting cgroup driver to use...
	I0610 20:37:06.984135   14920 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 20:37:07.002741   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 20:37:07.021767   14920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 20:37:07.038592   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 20:37:07.049725   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 20:37:07.060632   14920 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 20:37:07.083349   14920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 20:37:07.093861   14920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 20:37:07.109285   14920 ssh_runner.go:195] Run: which cri-dockerd
	I0610 20:37:07.112404   14920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 20:37:07.119620   14920 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 20:37:07.133451   14920 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 20:37:07.234019   14920 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 20:37:07.330786   14920 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 20:37:07.330933   14920 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 20:37:07.346493   14920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 20:37:07.456848   14920 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 20:38:08.498818   14920 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.042185144s)
	I0610 20:38:08.498890   14920 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0610 20:38:08.534967   14920 out.go:177] 
	W0610 20:38:08.556512   14920 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 03:37:05 default-k8s-diff-port-486000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507022834Z" level=info msg="Starting up"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507450410Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507985994Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=501
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.527636180Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543799609Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543850276Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543892196Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543902584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543935399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543944516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544047326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544083215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544095122Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544102089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544127296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544207676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545799762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545837992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545950488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545983642Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546012540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546027782Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546035402Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547156580Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547243844Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547278829Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547290924Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547299826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547400342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547614797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547708357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547741469Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547752757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547761512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547780199Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547791365Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547800326Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547809360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547823004Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547838536Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547848200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547860920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547870324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547878433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547887087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547894745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547902954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547910316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547919000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547927575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547937541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547944915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547954547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547967297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547979506Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547993891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548002455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548010309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548053826Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548088398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548098566Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548106611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548367650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548379183Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548386104Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548524135Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548639668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548691449Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548715490Z" level=info msg="containerd successfully booted in 0.022046s"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.525629013Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.556349360Z" level=info msg="Loading containers: start."
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.719866226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.755146085Z" level=info msg="Loading containers: done."
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.791400474Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.791578269Z" level=info msg="Daemon has completed initialization"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.809335795Z" level=info msg="API listen on [::]:2376"
	Jun 11 03:37:06 default-k8s-diff-port-486000 systemd[1]: Started Docker Application Container Engine.
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.809399827Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.774328481Z" level=info msg="Processing signal 'terminated'"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775261697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775532873Z" level=info msg="Daemon shutdown complete"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775584883Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775599278Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 03:37:07 default-k8s-diff-port-486000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:37:08 default-k8s-diff-port-486000 dockerd[877]: time="2024-06-11T03:37:08.824417711Z" level=info msg="Starting up"
	Jun 11 03:38:08 default-k8s-diff-port-486000 dockerd[877]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
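
Reading the journal above: the first dockerd (pid 495) comes up cleanly by spawning its own managed containerd, minikube then restarts docker to apply the rewritten unit and daemon.json, and the second dockerd (pid 877) instead blocks dialing /run/containerd/containerd.sock until the 60-second deadline expires, which accounts for the 1m1s spent in "sudo systemctl restart docker". One plausible reading, given that the run had stopped the system containerd service just beforehand (systemctl stop -f containerd at 20:37:07), is that dockerd is waiting on a containerd endpoint that is no longer being served. The usual next steps on the guest are standard systemd/containerd tooling (not commands this log ran):

	sudo systemctl status containerd docker          # which units are up or failed
	sudo journalctl -u containerd --no-pager -n 50   # why containerd is not serving
	ls -l /run/containerd/containerd.sock            # is a stale socket file present?
	sudo ctr --address /run/containerd/containerd.sock version  # can the socket be dialed?
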
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jun 11 03:37:05 default-k8s-diff-port-486000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507022834Z" level=info msg="Starting up"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507450410Z" level=info msg="containerd not running, starting managed containerd"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:05.507985994Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=501
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.527636180Z" level=info msg="starting containerd" revision=3a4de459a68952ffb703bbe7f2290861a75b6b67 version=v1.7.17
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543799609Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543850276Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543892196Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543902584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543935399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.543944516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544047326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544083215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544095122Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544102089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544127296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.544207676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545799762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545837992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545950488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.545983642Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546012540Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546027782Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.546035402Z" level=info msg="metadata content store policy set" policy=shared
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547156580Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547243844Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547278829Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547290924Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547299826Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547400342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547614797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547708357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547741469Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547752757Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547761512Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547780199Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547791365Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547800326Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547809360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547823004Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547838536Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547848200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547860920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547870324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547878433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547887087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547894745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547902954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547910316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547919000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547927575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547937541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547944915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547954547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547967297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547979506Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.547993891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548002455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548010309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548053826Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548088398Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548098566Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548106611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548367650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548379183Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548386104Z" level=info msg="NRI interface is disabled by configuration."
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548524135Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548639668Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548691449Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jun 11 03:37:05 default-k8s-diff-port-486000 dockerd[501]: time="2024-06-11T03:37:05.548715490Z" level=info msg="containerd successfully booted in 0.022046s"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.525629013Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.556349360Z" level=info msg="Loading containers: start."
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.719866226Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.755146085Z" level=info msg="Loading containers: done."
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.791400474Z" level=info msg="Docker daemon" commit=de5c9cf containerd-snapshotter=false storage-driver=overlay2 version=26.1.4
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.791578269Z" level=info msg="Daemon has completed initialization"
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.809335795Z" level=info msg="API listen on [::]:2376"
	Jun 11 03:37:06 default-k8s-diff-port-486000 systemd[1]: Started Docker Application Container Engine.
	Jun 11 03:37:06 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:06.809399827Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.774328481Z" level=info msg="Processing signal 'terminated'"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775261697Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775532873Z" level=info msg="Daemon shutdown complete"
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775584883Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 11 03:37:07 default-k8s-diff-port-486000 dockerd[495]: time="2024-06-11T03:37:07.775599278Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jun 11 03:37:07 default-k8s-diff-port-486000 systemd[1]: Stopping Docker Application Container Engine...
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Deactivated successfully.
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: Stopped Docker Application Container Engine.
	Jun 11 03:37:08 default-k8s-diff-port-486000 systemd[1]: Starting Docker Application Container Engine...
	Jun 11 03:37:08 default-k8s-diff-port-486000 dockerd[877]: time="2024-06-11T03:37:08.824417711Z" level=info msg="Starting up"
	Jun 11 03:38:08 default-k8s-diff-port-486000 dockerd[877]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jun 11 03:38:08 default-k8s-diff-port-486000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0610 20:38:08.556633   14920 out.go:239] * 
	W0610 20:38:08.558024   14920 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 20:38:08.641496   14920 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-486000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.1": exit status 90
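Note: the root cause in the journal above is dockerd timing out while dialing containerd's socket ("context deadline exceeded" after a full minute, 03:37:08 to 03:38:08). A minimal Go sketch of the same probe, useful for checking the socket by hand inside the guest (the 10-second timeout is an illustrative assumption, not dockerd's actual value):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the same unix socket dockerd gave up on.
func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 10*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket is accepting connections")
}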
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (144.675707ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:38:08.808031   14975 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (78.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-486000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (144.322563ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:38:08.952037   14980 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.15s)
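Note: this failure and the status.go:417 errors above reduce to the same condition: the profile's entry is missing from the kubeconfig after the failed restart. A hedged sketch of that check using client-go (the path mirrors the log; this is not the exact code behind status.go:417):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19046-5942/kubeconfig")
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	// Essentially the condition that yields "does not appear in ... kubeconfig":
	// no context entry for the profile.
	if _, ok := cfg.Contexts["default-k8s-diff-port-486000"]; !ok {
		fmt.Println(`context "default-k8s-diff-port-486000" does not exist`)
	}
}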

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-486000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (37.195283ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-486000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-486000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (145.212564ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:38:09.135562   14986 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.18s)
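Note: the repeated `status --format={{.Host}}` invocations in these post-mortems render a status struct through Go's text/template, which is why only the host state ("Running") reaches stdout. A sketch with assumed field names (minikube's actual status type may differ):

package main

import (
	"os"
	"text/template"
)

// Field names here are assumptions modeled on the flag, not minikube's exact type.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
}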

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (59.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-486000 image list --format=json
E0610 20:38:15.776387    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:38:17.985857    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:17.992245    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.004381    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.025957    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.068141    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.150269    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.311574    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:18.632095    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:19.274279    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:20.555647    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:23.117276    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:24.407298    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:38:28.237642    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:36.257498    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
E0610 20:38:38.478367    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
E0610 20:38:56.893196    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:38:58.865619    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:38:58.960617    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
start_stop_delete_test.go:304: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-diff-port-486000 image list --format=json: (59.55502354s)
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
}
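Note: the "(-want +got)" block above is a go-cmp diff of the expected v1.30.1 image list against what `image list --format=json` returned, which was empty because the container runtime never came up. In sketch form (hedged; the real assertion lives in start_stop_delete_test.go:304):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.30.1",
		// ...remaining v1.30.1 images as listed above
	}
	var got []string // empty: nothing is running, so nothing is listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.1 images missing (-want +got):\n%s", diff)
	}
}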
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (159.717272ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:39:08.848379   15034 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (59.72s)
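Note: the burst of cert_rotation.go:168 errors interleaved in this test's output comes from client-go periodically reloading client certificate/key pairs for profiles whose files have already been deleted. The failing operation is essentially the following (paths copied from the log; client-go's actual reload code differs):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const dir = "/Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000"
	// Once the profile is deleted, every periodic reload fails the same way.
	if _, err := tls.LoadX509KeyPair(dir+"/client.crt", dir+"/client.key"); err != nil {
		fmt.Println("key failed with :", err)
	}
}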

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-486000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-486000 --alsologtostderr -v=1: exit status 80 (1.654794515s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-486000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 20:39:08.916796   15039 out.go:291] Setting OutFile to fd 1 ...
	I0610 20:39:08.917098   15039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 20:39:08.917103   15039 out.go:304] Setting ErrFile to fd 2...
	I0610 20:39:08.917107   15039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 20:39:08.917784   15039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 20:39:08.918444   15039 out.go:298] Setting JSON to false
	I0610 20:39:08.918473   15039 mustload.go:65] Loading cluster: default-k8s-diff-port-486000
	I0610 20:39:08.918742   15039 config.go:182] Loaded profile config "default-k8s-diff-port-486000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 20:39:08.919076   15039 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:39:08.919128   15039 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:39:08.927542   15039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59013
	I0610 20:39:08.927944   15039 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:39:08.928365   15039 main.go:141] libmachine: Using API Version  1
	I0610 20:39:08.928388   15039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:39:08.928618   15039 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:39:08.928738   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetState
	I0610 20:39:08.928843   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 20:39:08.928894   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) DBG | hyperkit pid from json: 14931
	I0610 20:39:08.929989   15039 host.go:66] Checking if "default-k8s-diff-port-486000" exists ...
	I0610 20:39:08.930234   15039 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:39:08.930256   15039 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:39:08.938704   15039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59015
	I0610 20:39:08.939056   15039 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:39:08.939371   15039 main.go:141] libmachine: Using API Version  1
	I0610 20:39:08.939385   15039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:39:08.939603   15039 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:39:08.939718   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:39:08.940400   15039 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1-1717668912-19038/minikube-v1.33.1-1717668912-19038-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-1717668912-19038-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/Users:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-486000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0610 20:39:08.962007   15039 out.go:177] * Pausing node default-k8s-diff-port-486000 ... 
	I0610 20:39:09.003789   15039 host.go:66] Checking if "default-k8s-diff-port-486000" exists ...
	I0610 20:39:09.004291   15039 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 20:39:09.004336   15039 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 20:39:09.013968   15039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59017
	I0610 20:39:09.014343   15039 main.go:141] libmachine: () Calling .GetVersion
	I0610 20:39:09.014672   15039 main.go:141] libmachine: Using API Version  1
	I0610 20:39:09.014680   15039 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 20:39:09.014889   15039 main.go:141] libmachine: () Calling .GetMachineName
	I0610 20:39:09.014997   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .DriverName
	I0610 20:39:09.015159   15039 ssh_runner.go:195] Run: systemctl --version
	I0610 20:39:09.015180   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHHostname
	I0610 20:39:09.015261   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHPort
	I0610 20:39:09.015337   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHKeyPath
	I0610 20:39:09.015460   15039 main.go:141] libmachine: (default-k8s-diff-port-486000) Calling .GetSSHUsername
	I0610 20:39:09.015544   15039 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/default-k8s-diff-port-486000/id_rsa Username:docker}
	I0610 20:39:09.045747   15039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 20:39:09.056000   15039 pause.go:51] kubelet running: false
	I0610 20:39:09.056075   15039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0610 20:39:09.066148   15039 retry.go:31] will retry after 242.752636ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0610 20:39:09.310368   15039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 20:39:09.322694   15039 pause.go:51] kubelet running: false
	I0610 20:39:09.322775   15039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0610 20:39:09.332690   15039 retry.go:31] will retry after 511.072873ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0610 20:39:09.845134   15039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 20:39:09.857441   15039 pause.go:51] kubelet running: false
	I0610 20:39:09.857507   15039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0610 20:39:09.867734   15039 retry.go:31] will retry after 464.201675ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0610 20:39:10.334044   15039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 20:39:10.346172   15039 pause.go:51] kubelet running: false
	I0610 20:39:10.346224   15039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0610 20:39:10.378672   15039 out.go:177] 
	W0610 20:39:10.400510   15039 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0610 20:39:10.400536   15039 out.go:239] * 
	W0610 20:39:10.418172   15039 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 20:39:10.487343   15039 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-486000 --alsologtostderr -v=1 failed: exit status 80
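Note: the retry.go:31 lines in the trace show the pause path re-running `systemctl disable --now kubelet` with short randomized backoffs (242ms, 511ms, 464ms) before surfacing GUEST_PAUSE. A rough sketch of that shape (not minikube's actual retry helper):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	const attempts = 4
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
		if err == nil {
			fmt.Println("kubelet disabled")
			return
		}
		if i == attempts {
			fmt.Printf("giving up after %d attempts: %v\n%s", attempts, err, out)
			return
		}
		// Randomized backoff roughly matching the logged 242-511ms waits.
		backoff := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}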
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (142.421667ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:39:10.649486   15044 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 6 (143.757567ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 20:39:10.793278   15049 status.go:417] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-486000" does not appear in /Users/jenkins/minikube-integration/19046-5942/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-486000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.94s)

                                                
                                    

Test pass (293/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.38
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.1/json-events 11.39
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.32
18 TestDownloadOnly/v1.30.1/DeleteAll 0.39
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.36
21 TestBinaryMirror 0.99
22 TestOffline 70.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
29 TestCertExpiration 247.48
30 TestDockerFlags 52.52
31 TestForceSystemdFlag 42.19
32 TestForceSystemdEnv 41.56
35 TestHyperKitDriverInstallOrUpdate 6.77
38 TestErrorSpam/setup 38.39
39 TestErrorSpam/start 1.63
40 TestErrorSpam/status 0.5
41 TestErrorSpam/pause 1.38
42 TestErrorSpam/unpause 1.33
43 TestErrorSpam/stop 155.83
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 64.07
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 66.05
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.07
54 TestFunctional/serial/CacheCmd/cache/add_remote 4.94
55 TestFunctional/serial/CacheCmd/cache/add_local 1.42
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
57 TestFunctional/serial/CacheCmd/cache/list 0.08
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
60 TestFunctional/serial/CacheCmd/cache/delete 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.93
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.35
63 TestFunctional/serial/ExtraConfig 40.8
64 TestFunctional/serial/ComponentHealth 0.05
65 TestFunctional/serial/LogsCmd 2.72
66 TestFunctional/serial/LogsFileCmd 2.67
67 TestFunctional/serial/InvalidService 4.38
69 TestFunctional/parallel/ConfigCmd 0.52
70 TestFunctional/parallel/DashboardCmd 11.18
71 TestFunctional/parallel/DryRun 1.09
72 TestFunctional/parallel/InternationalLanguage 0.65
73 TestFunctional/parallel/StatusCmd 0.5
77 TestFunctional/parallel/ServiceCmdConnect 7.6
78 TestFunctional/parallel/AddonsCmd 0.27
79 TestFunctional/parallel/PersistentVolumeClaim 27.17
81 TestFunctional/parallel/SSHCmd 0.31
82 TestFunctional/parallel/CpCmd 0.93
83 TestFunctional/parallel/MySQL 26.3
84 TestFunctional/parallel/FileSync 0.21
85 TestFunctional/parallel/CertSync 1.09
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
93 TestFunctional/parallel/License 0.85
94 TestFunctional/parallel/Version/short 0.15
95 TestFunctional/parallel/Version/components 0.49
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
100 TestFunctional/parallel/ImageCommands/ImageBuild 3.06
101 TestFunctional/parallel/ImageCommands/Setup 3.16
102 TestFunctional/parallel/DockerEnv/bash 0.74
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.34
107 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.15
108 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.31
109 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.18
110 TestFunctional/parallel/ImageCommands/ImageRemove 0.37
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.25
113 TestFunctional/parallel/ServiceCmd/DeployApp 11.12
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.15
119 TestFunctional/parallel/ServiceCmd/List 0.37
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
122 TestFunctional/parallel/ServiceCmd/Format 0.26
123 TestFunctional/parallel/ServiceCmd/URL 0.28
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
131 TestFunctional/parallel/ProfileCmd/profile_list 0.29
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
133 TestFunctional/parallel/MountCmd/any-port 6.84
134 TestFunctional/parallel/MountCmd/specific-port 1.38
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.81
136 TestFunctional/delete_addon-resizer_images 0.13
137 TestFunctional/delete_my-image_image 0.05
138 TestFunctional/delete_minikube_cached_images 0.05
142 TestMultiControlPlane/serial/StartCluster 209.26
143 TestMultiControlPlane/serial/DeployApp 6.03
144 TestMultiControlPlane/serial/PingHostFromPods 1.34
145 TestMultiControlPlane/serial/AddWorkerNode 42.03
146 TestMultiControlPlane/serial/NodeLabels 0.06
147 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.48
148 TestMultiControlPlane/serial/CopyFile 9.45
149 TestMultiControlPlane/serial/StopSecondaryNode 8.72
150 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.31
151 TestMultiControlPlane/serial/RestartSecondaryNode 39.67
152 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
153 TestMultiControlPlane/serial/RestartClusterKeepsNodes 323.08
154 TestMultiControlPlane/serial/DeleteSecondaryNode 8.3
156 TestMultiControlPlane/serial/StopCluster 249.53
157 TestMultiControlPlane/serial/RestartCluster 105.04
163 TestImageBuild/serial/Setup 39.96
164 TestImageBuild/serial/NormalBuild 2.67
165 TestImageBuild/serial/BuildWithBuildArg 0.52
166 TestImageBuild/serial/BuildWithDockerIgnore 0.25
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.23
171 TestJSONOutput/start/Command 54.21
172 TestJSONOutput/start/Audit 0
174 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/pause/Command 0.48
178 TestJSONOutput/pause/Audit 0
180 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/unpause/Command 0.46
184 TestJSONOutput/unpause/Audit 0
186 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/stop/Command 8.35
190 TestJSONOutput/stop/Audit 0
192 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
194 TestErrorJSONOutput 0.76
199 TestMainNoArgs 0.08
200 TestMinikubeProfile 93.12
203 TestMountStart/serial/StartWithMountFirst 19.37
204 TestMountStart/serial/VerifyMountFirst 0.3
205 TestMountStart/serial/StartWithMountSecond 21.31
206 TestMountStart/serial/VerifyMountSecond 0.3
207 TestMountStart/serial/DeleteFirst 2.38
208 TestMountStart/serial/VerifyMountPostDelete 0.3
209 TestMountStart/serial/Stop 8.42
210 TestMountStart/serial/RestartStopped 42.69
211 TestMountStart/serial/VerifyMountPostStop 0.3
214 TestMultiNode/serial/FreshStart2Nodes 129.79
215 TestMultiNode/serial/DeployApp2Nodes 5.49
216 TestMultiNode/serial/PingHostFrom2Pods 0.91
217 TestMultiNode/serial/AddNode 67.29
218 TestMultiNode/serial/MultiNodeLabels 0.05
219 TestMultiNode/serial/ProfileList 0.21
220 TestMultiNode/serial/CopyFile 5.39
221 TestMultiNode/serial/StopNode 2.87
225 TestMultiNode/serial/StopMultiNode 16.79
226 TestMultiNode/serial/RestartMultiNode 73.93
227 TestMultiNode/serial/ValidateNameConflict 48.13
231 TestPreload 233.52
234 TestSkaffold 232.87
237 TestRunningBinaryUpgrade 88.63
239 TestKubernetesUpgrade 236.61
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.28
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.38
254 TestStoppedBinaryUpgrade/Setup 1.89
255 TestStoppedBinaryUpgrade/Upgrade 94.48
256 TestStoppedBinaryUpgrade/MinikubeLogs 2.65
258 TestPause/serial/Start 50.68
259 TestPause/serial/SecondStartNoReconfiguration 41.54
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.71
269 TestNoKubernetes/serial/StartWithK8s 38.63
270 TestPause/serial/Pause 0.61
271 TestPause/serial/VerifyStatus 0.19
272 TestPause/serial/Unpause 0.61
273 TestPause/serial/PauseAgain 0.68
274 TestPause/serial/DeletePaused 5.24
275 TestPause/serial/VerifyDeletedResources 0.28
276 TestNetworkPlugins/group/auto/Start 90.83
277 TestNoKubernetes/serial/StartWithStopK8s 19.45
278 TestNoKubernetes/serial/Start 21.31
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
280 TestNoKubernetes/serial/ProfileList 0.55
281 TestNoKubernetes/serial/Stop 2.38
282 TestNoKubernetes/serial/StartNoArgs 19.41
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
284 TestNetworkPlugins/group/kindnet/Start 63.52
285 TestNetworkPlugins/group/auto/KubeletFlags 0.16
286 TestNetworkPlugins/group/auto/NetCatPod 11.15
287 TestNetworkPlugins/group/auto/DNS 0.13
288 TestNetworkPlugins/group/auto/Localhost 0.11
289 TestNetworkPlugins/group/auto/HairPin 0.1
290 TestNetworkPlugins/group/calico/Start 73.56
291 TestNetworkPlugins/group/kindnet/ControllerPod 6
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
293 TestNetworkPlugins/group/kindnet/NetCatPod 12.15
294 TestNetworkPlugins/group/kindnet/DNS 0.13
295 TestNetworkPlugins/group/kindnet/Localhost 0.11
296 TestNetworkPlugins/group/kindnet/HairPin 0.1
297 TestNetworkPlugins/group/custom-flannel/Start 64.83
298 TestNetworkPlugins/group/calico/ControllerPod 6.01
299 TestNetworkPlugins/group/calico/KubeletFlags 0.15
300 TestNetworkPlugins/group/calico/NetCatPod 12.15
301 TestNetworkPlugins/group/calico/DNS 0.13
302 TestNetworkPlugins/group/calico/Localhost 0.14
303 TestNetworkPlugins/group/calico/HairPin 0.11
304 TestNetworkPlugins/group/false/Start 62.01
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.14
307 TestNetworkPlugins/group/custom-flannel/DNS 0.13
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
310 TestNetworkPlugins/group/enable-default-cni/Start 54.5
311 TestNetworkPlugins/group/false/KubeletFlags 0.2
312 TestNetworkPlugins/group/false/NetCatPod 10.16
313 TestNetworkPlugins/group/false/DNS 0.12
314 TestNetworkPlugins/group/false/Localhost 0.1
315 TestNetworkPlugins/group/false/HairPin 0.11
316 TestNetworkPlugins/group/flannel/Start 62.97
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.15
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
322 TestNetworkPlugins/group/bridge/Start 55.96
323 TestNetworkPlugins/group/flannel/ControllerPod 6
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
325 TestNetworkPlugins/group/flannel/NetCatPod 13.15
326 TestNetworkPlugins/group/flannel/DNS 0.13
327 TestNetworkPlugins/group/flannel/Localhost 0.11
328 TestNetworkPlugins/group/flannel/HairPin 0.09
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
330 TestNetworkPlugins/group/bridge/NetCatPod 12.15
331 TestNetworkPlugins/group/kubenet/Start 54.89
332 TestNetworkPlugins/group/bridge/DNS 0.12
333 TestNetworkPlugins/group/bridge/Localhost 0.11
334 TestNetworkPlugins/group/bridge/HairPin 0.1
336 TestStartStop/group/old-k8s-version/serial/FirstStart 143.75
337 TestNetworkPlugins/group/kubenet/KubeletFlags 0.18
338 TestNetworkPlugins/group/kubenet/NetCatPod 11.14
339 TestNetworkPlugins/group/kubenet/DNS 0.13
340 TestNetworkPlugins/group/kubenet/Localhost 0.1
341 TestNetworkPlugins/group/kubenet/HairPin 0.1
343 TestStartStop/group/no-preload/serial/FirstStart 58.42
344 TestStartStop/group/no-preload/serial/DeployApp 9.21
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
346 TestStartStop/group/no-preload/serial/Stop 8.49
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
348 TestStartStop/group/no-preload/serial/SecondStart 287.23
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.33
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
351 TestStartStop/group/old-k8s-version/serial/Stop 8.4
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.33
353 TestStartStop/group/old-k8s-version/serial/SecondStart 400.04
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
357 TestStartStop/group/no-preload/serial/Pause 1.92
359 TestStartStop/group/embed-certs/serial/FirstStart 61.53
360 TestStartStop/group/embed-certs/serial/DeployApp 10.2
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.77
362 TestStartStop/group/embed-certs/serial/Stop 8.41
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
364 TestStartStop/group/embed-certs/serial/SecondStart 288.63
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.17
368 TestStartStop/group/old-k8s-version/serial/Pause 2.01
370 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54
371 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.21
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.72
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.43
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
381 TestStartStop/group/newest-cni/serial/FirstStart 48.71
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.06
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
385 TestStartStop/group/embed-certs/serial/Pause 2.08
386 TestStartStop/group/newest-cni/serial/DeployApp 0
387 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
388 TestStartStop/group/newest-cni/serial/Stop 8.41
389 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.36
390 TestStartStop/group/newest-cni/serial/SecondStart 28.82
391 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
393 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.14
394 TestStartStop/group/newest-cni/serial/Pause 1.81
TestDownloadOnly/v1.20.0/json-events (18.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-311000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-311000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (18.656964304s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.66s)
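
Note: `start -o=json` streams minikube's progress as CloudEvents-style JSON lines; the TestJSONOutput step subtests in the table above (DistinctCurrentSteps, IncreasingCurrentSteps) assert invariants over those events. A minimal sketch of the increasing-step check, assuming the `io.k8s.sigs.minikube.step` event type and a string-valued `currentstep` field as emitted by this minikube version:

// check_steps.go: sketch of the step-monotonicity invariant that
// TestJSONOutput/.../IncreasingCurrentSteps asserts on `minikube start -o=json`
// output. The event type and field names are assumptions tied to the
// CloudEvents-style schema this minikube version emits.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start -o=json ... | check_steps
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-step events and non-JSON lines
		}
		step, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue
		}
		if step < last {
			fmt.Fprintf(os.Stderr, "currentstep went backwards: %d -> %d\n", last, step)
			os.Exit(1)
		}
		last = step
	}
}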

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-311000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-311000: exit status 85 (296.693798ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-311000 | jenkins | v1.33.1 | 10 Jun 24 18:52 PDT |          |
	|         | -p download-only-311000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 18:52:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 18:52:06.119104    6487 out.go:291] Setting OutFile to fd 1 ...
	I0610 18:52:06.119395    6487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:06.119400    6487 out.go:304] Setting ErrFile to fd 2...
	I0610 18:52:06.119403    6487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:06.119580    6487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	W0610 18:52:06.119683    6487 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19046-5942/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19046-5942/.minikube/config/config.json: no such file or directory
	I0610 18:52:06.121431    6487 out.go:298] Setting JSON to true
	I0610 18:52:06.143987    6487 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22882,"bootTime":1718047844,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 18:52:06.144070    6487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 18:52:06.165699    6487 out.go:97] [download-only-311000] minikube v1.33.1 on Darwin 14.4.1
	I0610 18:52:06.187611    6487 out.go:169] MINIKUBE_LOCATION=19046
	I0610 18:52:06.165951    6487 notify.go:220] Checking for updates...
	W0610 18:52:06.165943    6487 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 18:52:06.231636    6487 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 18:52:06.253898    6487 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 18:52:06.275589    6487 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 18:52:06.296847    6487 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	W0610 18:52:06.339421    6487 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 18:52:06.339881    6487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 18:52:06.370628    6487 out.go:97] Using the hyperkit driver based on user configuration
	I0610 18:52:06.370722    6487 start.go:297] selected driver: hyperkit
	I0610 18:52:06.370741    6487 start.go:901] validating driver "hyperkit" against <nil>
	I0610 18:52:06.370956    6487 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:06.371179    6487 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 18:52:06.601126    6487 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 18:52:06.605073    6487 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 18:52:06.605096    6487 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 18:52:06.605120    6487 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 18:52:06.608348    6487 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0610 18:52:06.608505    6487 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 18:52:06.608532    6487 cni.go:84] Creating CNI manager for ""
	I0610 18:52:06.608546    6487 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 18:52:06.608612    6487 start.go:340] cluster config:
	{Name:download-only-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 18:52:06.608832    6487 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:06.631017    6487 out.go:97] Downloading VM boot image ...
	I0610 18:52:06.631113    6487 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 18:52:13.150652    6487 out.go:97] Starting "download-only-311000" primary control-plane node in "download-only-311000" cluster
	I0610 18:52:13.150730    6487 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 18:52:13.258532    6487 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0610 18:52:13.258578    6487 cache.go:56] Caching tarball of preloaded images
	I0610 18:52:13.259677    6487 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 18:52:13.281043    6487 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 18:52:13.281106    6487 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:13.515983    6487 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0610 18:52:20.423338    6487 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:20.423569    6487 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:20.964517    6487 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 18:52:20.964805    6487 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/download-only-311000/config.json ...
	I0610 18:52:20.964880    6487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/download-only-311000/config.json: {Name:mkdfddcd86bde68ef1e53b1b3a7d3c1e4545bd2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 18:52:20.965249    6487 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 18:52:20.966577    6487 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-311000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-311000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
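
Note: the Last Start log above fetches the boot ISO and the preload tarball through URLs carrying a `?checksum=` fragment, then records saving and verifying that checksum. A minimal sketch of the verification step for the md5 case, reusing the digest literally present in the preload URL above (the local file path is illustrative):

// verify_md5.go: sketch of the "verifying checksum of ... .tar.lz4" step
// recorded in the log above; not minikube's actual download code.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest taken from the ?checksum=md5:... fragment in the log above.
	fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3"))
}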

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.38s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-311000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (11.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-017000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-017000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=hyperkit : (11.392013937s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (11.39s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-017000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-017000: exit status 85 (321.591143ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-311000 | jenkins | v1.33.1 | 10 Jun 24 18:52 PDT |                     |
	|         | -p download-only-311000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 10 Jun 24 18:52 PDT | 10 Jun 24 18:52 PDT |
	| delete  | -p download-only-311000        | download-only-311000 | jenkins | v1.33.1 | 10 Jun 24 18:52 PDT | 10 Jun 24 18:52 PDT |
	| start   | -o=json --download-only        | download-only-017000 | jenkins | v1.33.1 | 10 Jun 24 18:52 PDT |                     |
	|         | -p download-only-017000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 18:52:25
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 18:52:25.828611    6521 out.go:291] Setting OutFile to fd 1 ...
	I0610 18:52:25.829345    6521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:25.829352    6521 out.go:304] Setting ErrFile to fd 2...
	I0610 18:52:25.829356    6521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 18:52:25.829964    6521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 18:52:25.831495    6521 out.go:298] Setting JSON to true
	I0610 18:52:25.853609    6521 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22901,"bootTime":1718047844,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 18:52:25.853701    6521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 18:52:25.874945    6521 out.go:97] [download-only-017000] minikube v1.33.1 on Darwin 14.4.1
	I0610 18:52:25.896798    6521 out.go:169] MINIKUBE_LOCATION=19046
	I0610 18:52:25.875184    6521 notify.go:220] Checking for updates...
	I0610 18:52:25.940800    6521 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 18:52:25.961775    6521 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 18:52:25.983770    6521 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 18:52:26.004872    6521 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	W0610 18:52:26.046601    6521 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 18:52:26.047023    6521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 18:52:26.078729    6521 out.go:97] Using the hyperkit driver based on user configuration
	I0610 18:52:26.078782    6521 start.go:297] selected driver: hyperkit
	I0610 18:52:26.078838    6521 start.go:901] validating driver "hyperkit" against <nil>
	I0610 18:52:26.079067    6521 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:26.079293    6521 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19046-5942/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0610 18:52:26.089523    6521 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0610 18:52:26.093794    6521 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 18:52:26.093820    6521 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0610 18:52:26.093882    6521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 18:52:26.096811    6521 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0610 18:52:26.097011    6521 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 18:52:26.097103    6521 cni.go:84] Creating CNI manager for ""
	I0610 18:52:26.097140    6521 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 18:52:26.097150    6521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 18:52:26.097229    6521 start.go:340] cluster config:
	{Name:download-only-017000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 18:52:26.097351    6521 iso.go:125] acquiring lock: {Name:mk09656d383f321c39be8062546440df099fe7ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 18:52:26.118673    6521 out.go:97] Starting "download-only-017000" primary control-plane node in "download-only-017000" cluster
	I0610 18:52:26.118709    6521 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 18:52:26.217364    6521 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 18:52:26.217399    6521 cache.go:56] Caching tarball of preloaded images
	I0610 18:52:26.217831    6521 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 18:52:26.239745    6521 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0610 18:52:26.239784    6521 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:26.454631    6521 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0610 18:52:32.950164    6521 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:32.950454    6521 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0610 18:52:33.437098    6521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 18:52:33.437364    6521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/download-only-017000/config.json ...
	I0610 18:52:33.437389    6521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/download-only-017000/config.json: {Name:mke2ae9cbba17b5cbd65e6a8b979a448f1baf003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 18:52:33.438930    6521 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 18:52:33.439281    6521 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19046-5942/.minikube/cache/darwin/amd64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-017000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-017000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.32s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-017000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.36s)

                                                
                                    
TestBinaryMirror (0.99s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-172000 --alsologtostderr --binary-mirror http://127.0.0.1:50363 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-172000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-172000
--- PASS: TestBinaryMirror (0.99s)
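
Note: `--binary-mirror http://127.0.0.1:50363` redirects kubectl/kubeadm/kubelet downloads away from dl.k8s.io to a local server, which the test presumably stands up before invoking minikube. A stand-in mirror is just a static file server; the directory layout is an assumption about what minikube requests:

// mirror.go: a stand-in for the local binary mirror this test points minikube
// at with --binary-mirror. The assumption is that ./mirror reproduces
// dl.k8s.io's /release/<version>/bin/<os>/<arch>/ layout for the binaries
// minikube asks for.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:50363", nil))
}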

                                                
                                    
TestOffline (70.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-908000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-908000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m5.421339605s)
helpers_test.go:175: Cleaning up "offline-docker-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-908000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-908000: (5.279054007s)
--- PASS: TestOffline (70.70s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-992000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-992000: exit status 85 (168.391894ms)

                                                
                                                
-- stdout --
	* Profile "addons-992000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-992000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-992000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-992000: exit status 85 (188.458404ms)

                                                
                                                
-- stdout --
	* Profile "addons-992000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-992000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestCertExpiration (247.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-918000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-918000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (35.588464727s)
E0610 20:11:59.465302    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-918000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0610 20:14:37.840104    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-918000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (26.476571877s)
helpers_test.go:175: Cleaning up "cert-expiration-918000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-918000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-918000: (5.410619298s)
--- PASS: TestCertExpiration (247.48s)
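
Note: this test starts a cluster whose certificates expire after 3m (--cert-expiration=3m), waits past expiry, and restarts with --cert-expiration=8760h, expecting the restart to regenerate the certificates rather than fail. A minimal sketch of what "expired" means here, assuming the profile's client.crt is a PEM-encoded x509 certificate (the path is illustrative, modeled on the client.crt paths in the log lines above):

// certcheck.go: sketch of checking a minikube profile's client certificate
// expiry; not part of the test suite.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/cert-expiration-918000/client.crt"))
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("NotAfter=%s expired=%v\n", cert.NotAfter, time.Now().After(cert.NotAfter))
}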

                                                
                                    
TestDockerFlags (52.52s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-160000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0610 20:10:36.306966    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-160000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (46.914681461s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-160000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-160000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-160000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-160000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-160000: (5.294936511s)
--- PASS: TestDockerFlags (52.52s)
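
Note: the two `systemctl show docker` invocations read back the --docker-env and --docker-opt values through the generated systemd unit. A minimal sketch of the containment check on the Environment property (the sample line is illustrative; the real test captures the output over ssh):

// sketch of the read-back assertion: values passed via --docker-env must
// appear in `systemctl show docker --property=Environment` output.
package main

import (
	"fmt"
	"strings"
)

func main() {
	out := "Environment=FOO=BAR BAZ=BAT" // format systemd prints for the property
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(out, want) {
			fmt.Printf("expected docker Environment to contain %q, got %q\n", want, out)
			return
		}
	}
	fmt.Println("all --docker-env values present")
}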

                                                
                                    
TestForceSystemdFlag (42.19s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-353000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-353000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (38.49425905s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-353000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-353000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-353000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-353000: (3.522386822s)
--- PASS: TestForceSystemdFlag (42.19s)
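
Note: `docker info --format {{.CgroupDriver}}` works because the docker CLI renders its info structure through Go's text/template, so the flag selects a single field. A minimal sketch of the mechanism with a stand-in struct (not docker's real Info type):

// sketch of the --format template mechanism used by the verification step above.
package main

import (
	"os"
	"text/template"
)

type info struct{ CgroupDriver string }

func main() {
	tmpl := template.Must(template.New("format").Parse("{{.CgroupDriver}}\n"))
	_ = tmpl.Execute(os.Stdout, info{CgroupDriver: "systemd"}) // prints: systemd
}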

                                                
                                    
TestForceSystemdEnv (41.56s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-205000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-205000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.937898343s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-205000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-205000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-205000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-205000: (3.447213387s)
--- PASS: TestForceSystemdEnv (41.56s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (6.77s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.77s)

                                                
                                    
TestErrorSpam/setup (38.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-679000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-679000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 --driver=hyperkit : (38.388463133s)
--- PASS: TestErrorSpam/setup (38.39s)

                                                
                                    
TestErrorSpam/start (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 start --dry-run
--- PASS: TestErrorSpam/start (1.63s)

                                                
                                    
TestErrorSpam/status (0.5s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 status
--- PASS: TestErrorSpam/status (0.50s)

                                                
                                    
TestErrorSpam/pause (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 pause
--- PASS: TestErrorSpam/pause (1.38s)

                                                
                                    
TestErrorSpam/unpause (1.33s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

                                                
                                    
TestErrorSpam/stop (155.83s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop: (5.383476439s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop: (1m15.232694295s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-679000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-679000 stop: (1m15.210481468s)
--- PASS: TestErrorSpam/stop (155.83s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19046-5942/.minikube/files/etc/test/nested/copy/6485/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (64.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-192000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m4.073449442s)
--- PASS: TestFunctional/serial/StartWithProxy (64.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (66.05s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-192000 --alsologtostderr -v=8: (1m6.0525989s)
functional_test.go:659: soft start took 1m6.053062239s for "functional-192000" cluster.
--- PASS: TestFunctional/serial/SoftStart (66.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-192000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:3.1: (1.793861047s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:3.3: (1.773267935s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 cache add registry.k8s.io/pause:latest: (1.372527169s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4024942917/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache add minikube-local-cache-test:functional-192000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache delete minikube-local-cache-test:functional-192000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-192000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (147.317385ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
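
The exit-status-1 above is the expected half of the check: the image is removed from the node, shown to be missing, restored from the cache, and shown to be back. By hand:

	out/minikube-darwin-amd64 -p functional-192000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	out/minikube-darwin-amd64 -p functional-192000 cache reload
	out/minikube-darwin-amd64 -p functional-192000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds: image restored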

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 kubectl -- --context functional-192000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-192000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-192000 get pods: (1.352498614s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-192000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.798708655s)
functional_test.go:757: restart took 40.798848468s for "functional-192000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.80s)
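
The restart boils down to a single command: rerunning start against the live profile applies the extra apiserver flag, and --wait=all blocks until every verified component is healthy again:

	out/minikube-darwin-amd64 start -p functional-192000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all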

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-192000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
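
The test parses the JSON pod list in Go; a roughly equivalent shell spot check (jsonpath instead of the test's own parsing, so the exact output format here is an assumption) would be:

	kubectl --context functional-192000 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'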

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 logs: (2.724658595s)
--- PASS: TestFunctional/serial/LogsCmd (2.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1300836373/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd1300836373/001/logs.txt: (2.667627539s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.67s)
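
Both log commands are reproducible by hand; /tmp/logs.txt below is illustrative, the test writes into a temp directory:

	out/minikube-darwin-amd64 -p functional-192000 logs
	out/minikube-darwin-amd64 -p functional-192000 logs --file /tmp/logs.txt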

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-192000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-192000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-192000: exit status 115 (270.583341ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.8:30212 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-192000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 config get cpus: exit status 14 (69.314463ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 config get cpus: exit status 14 (57.993188ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
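
The exit-status-14 failures above are the point of the test: config get on an unset key fails, and the round trip proves set/get/unset all behave. The same cycle by hand:

	out/minikube-darwin-amd64 -p functional-192000 config unset cpus
	out/minikube-darwin-amd64 -p functional-192000 config get cpus     # exit 14: key not set
	out/minikube-darwin-amd64 -p functional-192000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-192000 config get cpus     # prints 2
	out/minikube-darwin-amd64 -p functional-192000 config unset cpus
	out/minikube-darwin-amd64 -p functional-192000 config get cpus     # exit 14 again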

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-192000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-192000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 7535: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.18s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-192000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (505.106885ms)

-- stdout --
	* [functional-192000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 19:01:32.947958    7487 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:01:32.948247    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:01:32.948252    7487 out.go:304] Setting ErrFile to fd 2...
	I0610 19:01:32.948256    7487 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:01:32.948429    7487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:01:32.949907    7487 out.go:298] Setting JSON to false
	I0610 19:01:32.972077    7487 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23448,"bootTime":1718047844,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:01:32.972194    7487 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:01:32.994625    7487 out.go:177] * [functional-192000] minikube v1.33.1 on Darwin 14.4.1
	I0610 19:01:33.038189    7487 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:01:33.038213    7487 notify.go:220] Checking for updates...
	I0610 19:01:33.081257    7487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:01:33.102283    7487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:01:33.123387    7487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:01:33.145471    7487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:01:33.167334    7487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:01:33.196058    7487 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:01:33.196528    7487 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:01:33.196581    7487 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:01:33.205588    7487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51275
	I0610 19:01:33.205982    7487 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:01:33.206389    7487 main.go:141] libmachine: Using API Version  1
	I0610 19:01:33.206399    7487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:01:33.206656    7487 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:01:33.206786    7487 main.go:141] libmachine: (functional-192000) Calling .DriverName
	I0610 19:01:33.206986    7487 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:01:33.207250    7487 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:01:33.207278    7487 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:01:33.215830    7487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51277
	I0610 19:01:33.216178    7487 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:01:33.216491    7487 main.go:141] libmachine: Using API Version  1
	I0610 19:01:33.216501    7487 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:01:33.216741    7487 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:01:33.216862    7487 main.go:141] libmachine: (functional-192000) Calling .DriverName
	I0610 19:01:33.243946    7487 out.go:177] * Using the hyperkit driver based on existing profile
	I0610 19:01:33.285963    7487 start.go:297] selected driver: hyperkit
	I0610 19:01:33.285991    7487 start.go:901] validating driver "hyperkit" against &{Name:functional-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.8 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:01:33.286157    7487 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:01:33.311002    7487 out.go:177] 
	W0610 19:01:33.331757    7487 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 19:01:33.352985    7487 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.09s)
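
A dry run validates flags against the existing profile without touching the VM. The first invocation above is rejected with exit code 23 because 250MB is below the 1800MB usable minimum; the second, without the memory override, validates cleanly:

	out/minikube-darwin-amd64 start -p functional-192000 --dry-run --memory 250MB --driver=hyperkit   # exit 23
	out/minikube-darwin-amd64 start -p functional-192000 --dry-run --alsologtostderr -v=1 --driver=hyperkit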

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-192000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-192000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (651.84865ms)

-- stdout --
	* [functional-192000] minikube v1.33.1 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0610 19:01:33.604841    7500 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:01:33.605085    7500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:01:33.605092    7500 out.go:304] Setting ErrFile to fd 2...
	I0610 19:01:33.605097    7500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:01:33.605345    7500 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:01:33.626661    7500 out.go:298] Setting JSON to false
	I0610 19:01:33.650496    7500 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23449,"bootTime":1718047844,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0610 19:01:33.650592    7500 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 19:01:33.690030    7500 out.go:177] * [functional-192000] minikube v1.33.1 sur Darwin 14.4.1
	I0610 19:01:33.752759    7500 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 19:01:33.731848    7500 notify.go:220] Checking for updates...
	I0610 19:01:33.815767    7500 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	I0610 19:01:33.878777    7500 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0610 19:01:33.899785    7500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 19:01:33.962704    7500 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	I0610 19:01:33.983917    7500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 19:01:34.005122    7500 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:01:34.005512    7500 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:01:34.005557    7500 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:01:34.014711    7500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51287
	I0610 19:01:34.015129    7500 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:01:34.015536    7500 main.go:141] libmachine: Using API Version  1
	I0610 19:01:34.015545    7500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:01:34.015779    7500 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:01:34.015886    7500 main.go:141] libmachine: (functional-192000) Calling .DriverName
	I0610 19:01:34.016074    7500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 19:01:34.016457    7500 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:01:34.016481    7500 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:01:34.025753    7500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51289
	I0610 19:01:34.026101    7500 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:01:34.026477    7500 main.go:141] libmachine: Using API Version  1
	I0610 19:01:34.026495    7500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:01:34.026697    7500 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:01:34.026818    7500 main.go:141] libmachine: (functional-192000) Calling .DriverName
	I0610 19:01:34.055808    7500 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0610 19:01:34.097741    7500 start.go:297] selected driver: hyperkit
	I0610 19:01:34.097757    7500 start.go:901] validating driver "hyperkit" against &{Name:functional-192000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-192000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.8 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 19:01:34.097874    7500 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 19:01:34.121701    7500 out.go:177] 
	W0610 19:01:34.142858    7500 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 19:01:34.163859    7500 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-192000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-192000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-h647b" [a5a9eb48-2757-43f3-aed2-6c17cfcf1d99] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-h647b" [a5a9eb48-2757-43f3-aed2-6c17cfcf1d99] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005345588s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.8:31696
functional_test.go:1671: http://192.169.0.8:31696: success! body:

Hostname: hello-node-connect-57b4589c47-h647b

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.8:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.8:31696
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.60s)
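
End to end, the connectivity check is: deploy echoserver, expose it as a NodePort, resolve the URL, fetch it. The test fetches with a Go HTTP client; curl below is a stand-in for that step:

	kubectl --context functional-192000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-192000 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-darwin-amd64 -p functional-192000 service hello-node-connect --url)
	curl "$URL"   # echoserver reflects the hostname and request headers, as in the body above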

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [016befe8-1fc1-4c41-bb80-8bbf5e478b90] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005402171s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-192000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-192000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-192000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-192000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [582f47be-19aa-4a10-94c7-d825339d5156] Pending
helpers_test.go:344: "sp-pod" [582f47be-19aa-4a10-94c7-d825339d5156] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [582f47be-19aa-4a10-94c7-d825339d5156] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004784226s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-192000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-192000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-192000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9a089db9-4afa-4503-a1cc-e1c8a1f8fbeb] Pending
helpers_test.go:344: "sp-pod" [9a089db9-4afa-4503-a1cc-e1c8a1f8fbeb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9a089db9-4afa-4503-a1cc-e1c8a1f8fbeb] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002827136s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-192000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.17s)
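
The sequence demonstrates that data written into the PVC-backed mount survives deleting and recreating the pod:

	kubectl --context functional-192000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-192000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-192000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-192000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-192000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-192000 exec sp-pod -- ls /tmp/mount   # foo is still there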

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh -n functional-192000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cp functional-192000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd66556885/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh -n functional-192000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh -n functional-192000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)
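
The three copy directions exercised above, by hand (the local destination path is illustrative; the test copies into a temp directory):

	out/minikube-darwin-amd64 -p functional-192000 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
	out/minikube-darwin-amd64 -p functional-192000 cp functional-192000:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
	out/minikube-darwin-amd64 -p functional-192000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt      # parent dirs created on the node
	out/minikube-darwin-amd64 -p functional-192000 ssh -n functional-192000 "sudo cat /home/docker/cp-test.txt"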

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-192000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-97pdf" [dfd4efcb-67d2-49b3-8c7f-7dbe1b20fb96] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-97pdf" [dfd4efcb-67d2-49b3-8c7f-7dbe1b20fb96] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.002898412s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;": exit status 1 (207.690141ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;": exit status 1 (199.372624ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;": exit status 1 (109.906614ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-192000 exec mysql-64454c8b5c-97pdf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.30s)
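
The repeated access-denied and socket errors above are expected: mysqld keeps initializing for a while after the pod reports Running, so the test retries the query until it succeeds. A hand-rolled equivalent (the retry loop is an assumption, not test code; deploy/mysql resolves to the current pod):

	until kubectl --context functional-192000 exec deploy/mysql -- \
	      mysql -ppassword -e "show databases;"; do
	  sleep 2
	done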

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6485/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /etc/test/nested/copy/6485/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /etc/ssl/certs/6485.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /usr/share/ca-certificates/6485.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/64852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /etc/ssl/certs/64852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/64852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /usr/share/ca-certificates/64852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-192000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
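
The label listing uses a go-template over the first node object; the same query from a shell (single quotes keep the template variables away from the shell):

	kubectl --context functional-192000 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'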

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "sudo systemctl is-active crio": exit status 1 (140.427195ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
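
With docker as the active runtime, crio must be stopped. systemctl is-active exits non-zero for an inactive unit, and that status propagates through ssh as the exit-status-3 seen above:

	out/minikube-darwin-amd64 -p functional-192000 ssh "sudo systemctl is-active crio"   # prints "inactive", exits 3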

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.85s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-192000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-192000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-192000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-192000 image ls --format short --alsologtostderr:
I0610 19:01:35.696440    7536 out.go:291] Setting OutFile to fd 1 ...
I0610 19:01:35.696761    7536 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:35.696767    7536 out.go:304] Setting ErrFile to fd 2...
I0610 19:01:35.696771    7536 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:35.696970    7536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:01:35.697606    7536 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:35.697706    7536 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:35.698075    7536 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:35.698128    7536 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:35.707114    7536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51336
I0610 19:01:35.707801    7536 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:35.708254    7536 main.go:141] libmachine: Using API Version  1
I0610 19:01:35.708264    7536 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:35.708509    7536 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:35.708634    7536 main.go:141] libmachine: (functional-192000) Calling .GetState
I0610 19:01:35.708737    7536 main.go:141] libmachine: (functional-192000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:01:35.708811    7536 main.go:141] libmachine: (functional-192000) DBG | hyperkit pid from json: 6779
I0610 19:01:35.710242    7536 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:35.710266    7536 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:35.719408    7536 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51339
I0610 19:01:35.719957    7536 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:35.720413    7536 main.go:141] libmachine: Using API Version  1
I0610 19:01:35.720431    7536 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:35.720768    7536 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:35.720950    7536 main.go:141] libmachine: (functional-192000) Calling .DriverName
I0610 19:01:35.721233    7536 ssh_runner.go:195] Run: systemctl --version
I0610 19:01:35.721253    7536 main.go:141] libmachine: (functional-192000) Calling .GetSSHHostname
I0610 19:01:35.721352    7536 main.go:141] libmachine: (functional-192000) Calling .GetSSHPort
I0610 19:01:35.721433    7536 main.go:141] libmachine: (functional-192000) Calling .GetSSHKeyPath
I0610 19:01:35.721555    7536 main.go:141] libmachine: (functional-192000) Calling .GetSSHUsername
I0610 19:01:35.721639    7536 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/functional-192000/id_rsa Username:docker}
I0610 19:01:35.759417    7536 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 19:01:35.781125    7536 main.go:141] libmachine: Making call to close driver server
I0610 19:01:35.781171    7536 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:35.781455    7536 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:35.781459    7536 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:35.781464    7536 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:35.781471    7536 main.go:141] libmachine: Making call to close driver server
I0610 19:01:35.781482    7536 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:35.781646    7536 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:35.781654    7536 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-192000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-192000 | c56d856cf5c84 | 30B    |
| docker.io/library/nginx                     | latest            | 4f67c83422ec7 | 188MB  |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-192000 | 5ee9f162f3117 | 1.24MB |
| docker.io/library/nginx                     | alpine            | 70ea0d8cc5300 | 48.3MB |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| gcr.io/google-containers/addon-resizer      | functional-192000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-192000 image ls --format table --alsologtostderr:
I0610 19:01:39.283392    7561 out.go:291] Setting OutFile to fd 1 ...
I0610 19:01:39.283595    7561 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:39.283604    7561 out.go:304] Setting ErrFile to fd 2...
I0610 19:01:39.283608    7561 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:39.283791    7561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:01:39.284426    7561 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:39.284531    7561 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:39.284893    7561 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:39.284936    7561 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:39.294028    7561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51377
I0610 19:01:39.294462    7561 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:39.294909    7561 main.go:141] libmachine: Using API Version  1
I0610 19:01:39.294923    7561 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:39.295165    7561 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:39.295287    7561 main.go:141] libmachine: (functional-192000) Calling .GetState
I0610 19:01:39.295378    7561 main.go:141] libmachine: (functional-192000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:01:39.295455    7561 main.go:141] libmachine: (functional-192000) DBG | hyperkit pid from json: 6779
I0610 19:01:39.296886    7561 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:39.296908    7561 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:39.305724    7561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51379
I0610 19:01:39.306209    7561 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:39.306574    7561 main.go:141] libmachine: Using API Version  1
I0610 19:01:39.306588    7561 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:39.306835    7561 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:39.306955    7561 main.go:141] libmachine: (functional-192000) Calling .DriverName
I0610 19:01:39.307137    7561 ssh_runner.go:195] Run: systemctl --version
I0610 19:01:39.307158    7561 main.go:141] libmachine: (functional-192000) Calling .GetSSHHostname
I0610 19:01:39.307250    7561 main.go:141] libmachine: (functional-192000) Calling .GetSSHPort
I0610 19:01:39.307347    7561 main.go:141] libmachine: (functional-192000) Calling .GetSSHKeyPath
I0610 19:01:39.307441    7561 main.go:141] libmachine: (functional-192000) Calling .GetSSHUsername
I0610 19:01:39.307530    7561 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/functional-192000/id_rsa Username:docker}
I0610 19:01:39.341371    7561 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 19:01:39.366974    7561 main.go:141] libmachine: Making call to close driver server
I0610 19:01:39.367005    7561 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:39.367288    7561 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:39.367332    7561 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:39.367360    7561 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:39.367368    7561 main.go:141] libmachine: Making call to close driver server
I0610 19:01:39.367373    7561 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:39.367648    7561 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:39.367729    7561 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:39.367780    7561 main.go:141] libmachine: Making call to close connection to plugin binary
2024/06/10 19:01:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-192000 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-192000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c56d856cf5c84d129d33d9e71c4667ae3abe30a6a69768be0e9d0b4416bd7dd1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-192000"],"size":"30"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"5ee9f162f3117e266c465f413a4d21605d47af75fb3190188060d52e00da9036","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-192000"],"size":"1240000"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-192000 image ls --format json --alsologtostderr:
I0610 19:01:39.110142    7557 out.go:291] Setting OutFile to fd 1 ...
I0610 19:01:39.110357    7557 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:39.110362    7557 out.go:304] Setting ErrFile to fd 2...
I0610 19:01:39.110366    7557 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:39.110560    7557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:01:39.111174    7557 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:39.111267    7557 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:39.111617    7557 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:39.111663    7557 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:39.120523    7557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51372
I0610 19:01:39.121104    7557 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:39.121620    7557 main.go:141] libmachine: Using API Version  1
I0610 19:01:39.121641    7557 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:39.121875    7557 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:39.122004    7557 main.go:141] libmachine: (functional-192000) Calling .GetState
I0610 19:01:39.122095    7557 main.go:141] libmachine: (functional-192000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:01:39.122168    7557 main.go:141] libmachine: (functional-192000) DBG | hyperkit pid from json: 6779
I0610 19:01:39.123576    7557 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:39.123597    7557 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:39.132218    7557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51374
I0610 19:01:39.132600    7557 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:39.133069    7557 main.go:141] libmachine: Using API Version  1
I0610 19:01:39.133109    7557 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:39.133428    7557 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:39.133580    7557 main.go:141] libmachine: (functional-192000) Calling .DriverName
I0610 19:01:39.133747    7557 ssh_runner.go:195] Run: systemctl --version
I0610 19:01:39.133772    7557 main.go:141] libmachine: (functional-192000) Calling .GetSSHHostname
I0610 19:01:39.133856    7557 main.go:141] libmachine: (functional-192000) Calling .GetSSHPort
I0610 19:01:39.133926    7557 main.go:141] libmachine: (functional-192000) Calling .GetSSHKeyPath
I0610 19:01:39.134025    7557 main.go:141] libmachine: (functional-192000) Calling .GetSSHUsername
I0610 19:01:39.134102    7557 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/functional-192000/id_rsa Username:docker}
I0610 19:01:39.165193    7557 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 19:01:39.199416    7557 main.go:141] libmachine: Making call to close driver server
I0610 19:01:39.199425    7557 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:39.199665    7557 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:39.199677    7557 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:39.199685    7557 main.go:141] libmachine: Making call to close driver server
I0610 19:01:39.199689    7557 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:39.199694    7557 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:39.199857    7557 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:39.199866    7557 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:39.199911    7557 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-192000 image ls --format yaml --alsologtostderr:
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c56d856cf5c84d129d33d9e71c4667ae3abe30a6a69768be0e9d0b4416bd7dd1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-192000
size: "30"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-192000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-192000 image ls --format yaml --alsologtostderr:
I0610 19:01:35.867890    7540 out.go:291] Setting OutFile to fd 1 ...
I0610 19:01:35.868185    7540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:35.868191    7540 out.go:304] Setting ErrFile to fd 2...
I0610 19:01:35.868196    7540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:35.868398    7540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:01:35.869043    7540 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:35.869135    7540 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:35.869510    7540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:35.869559    7540 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:35.878347    7540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51344
I0610 19:01:35.878950    7540 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:35.879382    7540 main.go:141] libmachine: Using API Version  1
I0610 19:01:35.879391    7540 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:35.879620    7540 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:35.879730    7540 main.go:141] libmachine: (functional-192000) Calling .GetState
I0610 19:01:35.879822    7540 main.go:141] libmachine: (functional-192000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:01:35.879908    7540 main.go:141] libmachine: (functional-192000) DBG | hyperkit pid from json: 6779
I0610 19:01:35.881448    7540 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:35.881505    7540 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:35.891289    7540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51346
I0610 19:01:35.891954    7540 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:35.892585    7540 main.go:141] libmachine: Using API Version  1
I0610 19:01:35.892599    7540 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:35.893049    7540 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:35.893203    7540 main.go:141] libmachine: (functional-192000) Calling .DriverName
I0610 19:01:35.893518    7540 ssh_runner.go:195] Run: systemctl --version
I0610 19:01:35.893538    7540 main.go:141] libmachine: (functional-192000) Calling .GetSSHHostname
I0610 19:01:35.893613    7540 main.go:141] libmachine: (functional-192000) Calling .GetSSHPort
I0610 19:01:35.893778    7540 main.go:141] libmachine: (functional-192000) Calling .GetSSHKeyPath
I0610 19:01:35.893863    7540 main.go:141] libmachine: (functional-192000) Calling .GetSSHUsername
I0610 19:01:35.893958    7540 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/functional-192000/id_rsa Username:docker}
I0610 19:01:35.930231    7540 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0610 19:01:35.950077    7540 main.go:141] libmachine: Making call to close driver server
I0610 19:01:35.950086    7540 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:35.950320    7540 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:35.950323    7540 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:35.950327    7540 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:35.950333    7540 main.go:141] libmachine: Making call to close driver server
I0610 19:01:35.950338    7540 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:35.950493    7540 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:35.950503    7540 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:35.950514    7540 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)
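
Taken together, the ImageListShort/Table/Json/Yaml tests above run the same listing once per output format. A condensed reproduction sketch against the same profile (the --format values mirror the test names; "short" is assumed here to be the default single-column listing):

	# One invocation per format exercised by the four ImageList tests.
	out/minikube-darwin-amd64 -p functional-192000 image ls --format short
	out/minikube-darwin-amd64 -p functional-192000 image ls --format table
	out/minikube-darwin-amd64 -p functional-192000 image ls --format json
	out/minikube-darwin-amd64 -p functional-192000 image ls --format yaml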

TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh pgrep buildkitd: exit status 1 (126.730564ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image build -t localhost/my-image:functional-192000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image build -t localhost/my-image:functional-192000 testdata/build --alsologtostderr: (2.776282098s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-192000 image build -t localhost/my-image:functional-192000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3861b85c97f2
---> Removed intermediate container 3861b85c97f2
---> eb947c91bb5e
Step 3/3 : ADD content.txt /
---> 5ee9f162f311
Successfully built 5ee9f162f311
Successfully tagged localhost/my-image:functional-192000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-192000 image build -t localhost/my-image:functional-192000 testdata/build --alsologtostderr:
I0610 19:01:36.173765    7549 out.go:291] Setting OutFile to fd 1 ...
I0610 19:01:36.174069    7549 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:36.174075    7549 out.go:304] Setting ErrFile to fd 2...
I0610 19:01:36.174078    7549 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 19:01:36.174264    7549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
I0610 19:01:36.174920    7549 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:36.175575    7549 config.go:182] Loaded profile config "functional-192000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 19:01:36.175935    7549 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:36.175975    7549 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:36.184166    7549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51357
I0610 19:01:36.184595    7549 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:36.185014    7549 main.go:141] libmachine: Using API Version  1
I0610 19:01:36.185024    7549 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:36.185298    7549 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:36.185411    7549 main.go:141] libmachine: (functional-192000) Calling .GetState
I0610 19:01:36.185581    7549 main.go:141] libmachine: (functional-192000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0610 19:01:36.185709    7549 main.go:141] libmachine: (functional-192000) DBG | hyperkit pid from json: 6779
I0610 19:01:36.187190    7549 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0610 19:01:36.187229    7549 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0610 19:01:36.195941    7549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51359
I0610 19:01:36.196459    7549 main.go:141] libmachine: () Calling .GetVersion
I0610 19:01:36.196869    7549 main.go:141] libmachine: Using API Version  1
I0610 19:01:36.196884    7549 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 19:01:36.197118    7549 main.go:141] libmachine: () Calling .GetMachineName
I0610 19:01:36.197234    7549 main.go:141] libmachine: (functional-192000) Calling .DriverName
I0610 19:01:36.197413    7549 ssh_runner.go:195] Run: systemctl --version
I0610 19:01:36.197432    7549 main.go:141] libmachine: (functional-192000) Calling .GetSSHHostname
I0610 19:01:36.197512    7549 main.go:141] libmachine: (functional-192000) Calling .GetSSHPort
I0610 19:01:36.197582    7549 main.go:141] libmachine: (functional-192000) Calling .GetSSHKeyPath
I0610 19:01:36.197725    7549 main.go:141] libmachine: (functional-192000) Calling .GetSSHUsername
I0610 19:01:36.197824    7549 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/functional-192000/id_rsa Username:docker}
I0610 19:01:36.237032    7549 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3737451268.tar
I0610 19:01:36.237101    7549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 19:01:36.245984    7549 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3737451268.tar
I0610 19:01:36.249229    7549 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3737451268.tar: stat -c "%s %y" /var/lib/minikube/build/build.3737451268.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3737451268.tar': No such file or directory
I0610 19:01:36.249250    7549 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3737451268.tar --> /var/lib/minikube/build/build.3737451268.tar (3072 bytes)
I0610 19:01:36.270243    7549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3737451268
I0610 19:01:36.279572    7549 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3737451268 -xf /var/lib/minikube/build/build.3737451268.tar
I0610 19:01:36.288684    7549 docker.go:360] Building image: /var/lib/minikube/build/build.3737451268
I0610 19:01:36.288750    7549 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-192000 /var/lib/minikube/build/build.3737451268
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0610 19:01:38.845547    7549 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-192000 /var/lib/minikube/build/build.3737451268: (2.556863985s)
I0610 19:01:38.845605    7549 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3737451268
I0610 19:01:38.855377    7549 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3737451268.tar
I0610 19:01:38.864846    7549 build_images.go:217] Built localhost/my-image:functional-192000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3737451268.tar
I0610 19:01:38.864905    7549 build_images.go:133] succeeded building to: functional-192000
I0610 19:01:38.864908    7549 build_images.go:134] failed building to: 
I0610 19:01:38.864941    7549 main.go:141] libmachine: Making call to close driver server
I0610 19:01:38.864967    7549 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:38.865240    7549 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:38.865388    7549 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 19:01:38.865425    7549 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:38.865437    7549 main.go:141] libmachine: Making call to close driver server
I0610 19:01:38.865444    7549 main.go:141] libmachine: (functional-192000) Calling .Close
I0610 19:01:38.865711    7549 main.go:141] libmachine: (functional-192000) DBG | Closing plugin on server side
I0610 19:01:38.865715    7549 main.go:141] libmachine: Successfully made call to close driver server
I0610 19:01:38.865724    7549 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)
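
For reference, the Step 1/3 through 3/3 lines in the build output imply a build context in testdata/build roughly like the sketch below. This is inferred from the log only; the real files (in particular the contents of content.txt) may differ:

	# Recreate an equivalent build context locally (hypothetical reconstruction).
	mkdir -p build && cd build
	printf 'test' > content.txt    # placeholder payload; actual contents not shown in the log
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-darwin-amd64 -p functional-192000 image build -t localhost/my-image:functional-192000 .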

TestFunctional/parallel/ImageCommands/Setup (3.16s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.098232659s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-192000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.16s)

TestFunctional/parallel/DockerEnv/bash (0.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-192000 docker-env) && out/minikube-darwin-amd64 status -p functional-192000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-192000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.74s)
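
For context, docker-env prints shell exports that repoint the host docker CLI at the daemon inside the minikube VM, which is what the eval in the test above consumes. A minimal sketch; the commented values are illustrative, not captured from this run:

	# docker-env emits exports along these lines (values vary per cluster):
	#   export DOCKER_TLS_VERIFY="1"
	#   export DOCKER_HOST="tcp://192.169.0.8:2376"
	#   export DOCKER_CERT_PATH="$HOME/.minikube/certs"
	eval $(out/minikube-darwin-amd64 -p functional-192000 docker-env)
	docker images    # now lists the images inside the functional-192000 VM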

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr: (4.180202828s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr: (1.989291593s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.752150014s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-192000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image load --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr: (3.341746572s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image save gcr.io/google-containers/addon-resizer:functional-192000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image save gcr.io/google-containers/addon-resizer:functional-192000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.182601001s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image rm gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.034304589s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-192000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 image save --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-192000 image save --daemon gcr.io/google-containers/addon-resizer:functional-192000 --alsologtostderr: (1.134189067s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-192000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.25s)
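
The ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon tests above form one save/remove/reload roundtrip. Condensed from the commands already shown in the logs:

	# Save from the cluster to a tarball, drop the in-cluster copy, reload it,
	# then export the image back into the host docker daemon.
	out/minikube-darwin-amd64 -p functional-192000 image save gcr.io/google-containers/addon-resizer:functional-192000 /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-192000 image rm gcr.io/google-containers/addon-resizer:functional-192000
	out/minikube-darwin-amd64 -p functional-192000 image load /Users/jenkins/workspace/addon-resizer-save.tar
	out/minikube-darwin-amd64 -p functional-192000 image save --daemon gcr.io/google-containers/addon-resizer:functional-192000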

TestFunctional/parallel/ServiceCmd/DeployApp (11.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-192000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-192000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-s6t4t" [f05aea21-b043-4c63-bf63-09bd0c914f84] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-s6t4t" [f05aea21-b043-4c63-bf63-09bd0c914f84] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.005181838s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.12s)
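
The later ServiceCmd subtests all query the hello-node service created here. A sketch of the deploy-and-expose flow (the first two commands are as in the log; the kubectl wait is an illustrative stand-in for the test's own pod polling):

	kubectl --context functional-192000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-192000 expose deployment hello-node --type=NodePort --port=8080
	# Stand-in for the test's "waiting ... for pods matching app=hello-node" loop:
	kubectl --context functional-192000 wait --for=condition=Ready pod -l app=hello-node --timeout=10m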

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7237: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-192000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a40d80d6-1e84-47c8-98e6-816b70ffb72c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a40d80d6-1e84-47c8-98e6-816b70ffb72c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005035477s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service list -o json
functional_test.go:1490: Took "369.83299ms" to run "out/minikube-darwin-amd64 -p functional-192000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.8:30279
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.8:30279
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-192000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.13.220 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
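
Condensing the tunnel DNS checks above into one reproduction sketch (the dig and dscacheutil lines are taken from the logs; the curl probe is an illustrative stand-in for the test's HTTP check):

	out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr &    # keep the tunnel running in the background
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
	curl -s http://nginx-svc.default.svc.cluster.local./    # assumption: plain HTTP GET against the tunneled service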

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-192000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "210.902309ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.685715ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "210.171895ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "80.582163ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/MountCmd/any-port (6.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port150821685/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718071282870176000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port150821685/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718071282870176000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port150821685/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718071282870176000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port150821685/001/test-1718071282870176000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.594953ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 11 02:01 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 11 02:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 11 02:01 test-1718071282870176000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh cat /mount-9p/test-1718071282870176000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-192000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7734a946-6f59-426a-bc43-7c1120284b72] Pending
helpers_test.go:344: "busybox-mount" [7734a946-6f59-426a-bc43-7c1120284b72] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7734a946-6f59-426a-bc43-7c1120284b72] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7734a946-6f59-426a-bc43-7c1120284b72] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004968482s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-192000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port150821685/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.84s)

TestFunctional/parallel/MountCmd/specific-port (1.38s)
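specific-port repeats the mount check with a fixed 9p server port rather than a randomly assigned one; the later umount failure (exit status 32, "not mounted") is tolerated by the test, since the mount can already be gone by the time the forced unmount runs. The pinned-port form, under the same assumptions as the sketch above:

	minikube mount -p functional-192000 /tmp/mnt:/mount-9p --port 46464 &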
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port272599930/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.241893ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port272599930/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "sudo umount -f /mount-9p": exit status 1 (128.114908ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-192000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port272599930/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.38s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)
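VerifyCleanup starts three concurrent mounts (/mount1 through /mount3) and then tears them all down at once with the kill switch; the "unable to find parent, assuming dead" lines afterwards are the expected sign that the mount daemons are gone. The cleanup command, as used in this run via the out/minikube-darwin-amd64 build:

	minikube mount -p functional-192000 --kill=true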
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount1: exit status 1 (170.012659ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount1: exit status 1 (220.95771ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-192000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-192000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-192000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup94291095/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

TestFunctional/delete_addon-resizer_images (0.13s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-192000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-192000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-192000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (209.26s)
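The TestMultiControlPlane block provisions a highly available cluster: --ha requests multiple control-plane nodes behind a shared virtual apiserver endpoint (the https://192.169.0.254:8443 address that the status probes below report). A manual equivalent mirroring the flags used here, again assuming minikube stands for the binary under test:

	minikube start -p ha-868000 --ha --wait=true --memory=2200 --driver=hyperkit
	minikube -p ha-868000 status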
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-868000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-868000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m28.87790026s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (209.26s)

TestMultiControlPlane/serial/DeployApp (6.03s)
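DeployApp schedules a small busybox deployment across the new cluster and checks in-cluster DNS from every replica. The same steps by hand, using minikube's bundled kubectl and a placeholder <pod> name:

	minikube kubectl -p ha-868000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	minikube kubectl -p ha-868000 -- rollout status deployment/busybox
	minikube kubectl -p ha-868000 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local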
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-868000 -- rollout status deployment/busybox: (3.601958021s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-jdbpd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-lxrx2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-psrx2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-jdbpd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-lxrx2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-psrx2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-jdbpd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-lxrx2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-psrx2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.03s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)
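PingHostFromPods checks pod-to-host networking: each pod resolves host.minikube.internal (the host alias minikube injects into the guest's DNS) and pings the resulting gateway address, 192.169.0.1 in this run. Per pod, the probe reduces to (with <pod> a placeholder):

	minikube kubectl -p ha-868000 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p ha-868000 -- exec <pod> -- sh -c "ping -c 1 192.169.0.1"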
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-jdbpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-jdbpd -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-lxrx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-lxrx2 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-psrx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-868000 -- exec busybox-fc5497c4f-psrx2 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

TestMultiControlPlane/serial/AddWorkerNode (42.03s)
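AddWorkerNode grows the cluster with a fourth machine. The cert_rotation errors interleaved below reference the client certificate of the functional-192000 profile deleted earlier; they appear to be leftover noise from a background certificate watcher in the long-running test process rather than failures of this test, which passes. The command under test:

	minikube node add -p ha-868000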
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-868000 -v=7 --alsologtostderr
E0610 19:05:36.303108    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.310712    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.322048    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.343654    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.385699    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.466734    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.627667    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:36.949153    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:37.590307    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:38.871148    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:41.432353    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:46.553430    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:05:56.794383    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-868000 -v=7 --alsologtostderr: (41.57205624s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (42.03s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-868000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.48s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.48s)

TestMultiControlPlane/serial/CopyFile (9.45s)
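CopyFile round-trips a fixture through every node pair with minikube cp, which accepts <node>:<path> on either side of the copy, and verifies each leg by cat-ing the file back over ssh. One leg of the matrix, as a sketch:

	minikube -p ha-868000 cp testdata/cp-test.txt ha-868000-m02:/home/docker/cp-test.txt
	minikube -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"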
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp testdata/cp-test.txt ha-868000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile105350531/001/cp-test_ha-868000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000:/home/docker/cp-test.txt ha-868000-m02:/home/docker/cp-test_ha-868000_ha-868000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test_ha-868000_ha-868000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000:/home/docker/cp-test.txt ha-868000-m03:/home/docker/cp-test_ha-868000_ha-868000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test_ha-868000_ha-868000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000:/home/docker/cp-test.txt ha-868000-m04:/home/docker/cp-test_ha-868000_ha-868000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test_ha-868000_ha-868000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp testdata/cp-test.txt ha-868000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile105350531/001/cp-test_ha-868000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m02:/home/docker/cp-test.txt ha-868000:/home/docker/cp-test_ha-868000-m02_ha-868000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test_ha-868000-m02_ha-868000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m02:/home/docker/cp-test.txt ha-868000-m03:/home/docker/cp-test_ha-868000-m02_ha-868000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test_ha-868000-m02_ha-868000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m02:/home/docker/cp-test.txt ha-868000-m04:/home/docker/cp-test_ha-868000-m02_ha-868000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test_ha-868000-m02_ha-868000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp testdata/cp-test.txt ha-868000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile105350531/001/cp-test_ha-868000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m03:/home/docker/cp-test.txt ha-868000:/home/docker/cp-test_ha-868000-m03_ha-868000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test_ha-868000-m03_ha-868000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m03:/home/docker/cp-test.txt ha-868000-m02:/home/docker/cp-test_ha-868000-m03_ha-868000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test_ha-868000-m03_ha-868000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m03:/home/docker/cp-test.txt ha-868000-m04:/home/docker/cp-test_ha-868000-m03_ha-868000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test_ha-868000-m03_ha-868000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp testdata/cp-test.txt ha-868000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test.txt"
E0610 19:06:17.275394    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile105350531/001/cp-test_ha-868000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m04:/home/docker/cp-test.txt ha-868000:/home/docker/cp-test_ha-868000-m04_ha-868000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000 "sudo cat /home/docker/cp-test_ha-868000-m04_ha-868000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m04:/home/docker/cp-test.txt ha-868000-m02:/home/docker/cp-test_ha-868000-m04_ha-868000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m02 "sudo cat /home/docker/cp-test_ha-868000-m04_ha-868000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 cp ha-868000-m04:/home/docker/cp-test.txt ha-868000-m03:/home/docker/cp-test_ha-868000-m04_ha-868000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 ssh -n ha-868000-m03 "sudo cat /home/docker/cp-test_ha-868000-m04_ha-868000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.45s)

TestMultiControlPlane/serial/StopSecondaryNode (8.72s)
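StopSecondaryNode stops only the m02 control plane and then asserts the degraded view of the cluster; minikube status deliberately exits non-zero (7 in this run) whenever a host is stopped, so the Non-zero exit recorded below is the expected outcome, not a failure:

	minikube -p ha-868000 node stop m02
	minikube -p ha-868000 status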
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-868000 node stop m02 -v=7 --alsologtostderr: (8.358094852s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr: exit status 7 (357.147409ms)
-- stdout --
	ha-868000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-868000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-868000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-868000-m04
	type: Worker
	host: Running
	kubelet: Running
-- /stdout --
** stderr ** 
	I0610 19:06:27.734096    8056 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:06:27.735036    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:06:27.735043    8056 out.go:304] Setting ErrFile to fd 2...
	I0610 19:06:27.735047    8056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:06:27.735245    8056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:06:27.735421    8056 out.go:298] Setting JSON to false
	I0610 19:06:27.735443    8056 mustload.go:65] Loading cluster: ha-868000
	I0610 19:06:27.735485    8056 notify.go:220] Checking for updates...
	I0610 19:06:27.735756    8056 config.go:182] Loaded profile config "ha-868000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:06:27.735771    8056 status.go:255] checking status of ha-868000 ...
	I0610 19:06:27.736194    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.736234    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.745398    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52114
	I0610 19:06:27.745813    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.746237    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.746267    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.746479    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.746600    8056 main.go:141] libmachine: (ha-868000) Calling .GetState
	I0610 19:06:27.746684    8056 main.go:141] libmachine: (ha-868000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:06:27.746763    8056 main.go:141] libmachine: (ha-868000) DBG | hyperkit pid from json: 7592
	I0610 19:06:27.747833    8056 status.go:330] ha-868000 host status = "Running" (err=<nil>)
	I0610 19:06:27.747854    8056 host.go:66] Checking if "ha-868000" exists ...
	I0610 19:06:27.748093    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.748115    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.756615    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52116
	I0610 19:06:27.757220    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.758042    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.758060    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.758305    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.758428    8056 main.go:141] libmachine: (ha-868000) Calling .GetIP
	I0610 19:06:27.758524    8056 host.go:66] Checking if "ha-868000" exists ...
	I0610 19:06:27.758768    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.758789    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.767483    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52118
	I0610 19:06:27.767873    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.768217    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.768230    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.768445    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.768547    8056 main.go:141] libmachine: (ha-868000) Calling .DriverName
	I0610 19:06:27.768680    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:06:27.768704    8056 main.go:141] libmachine: (ha-868000) Calling .GetSSHHostname
	I0610 19:06:27.768784    8056 main.go:141] libmachine: (ha-868000) Calling .GetSSHPort
	I0610 19:06:27.768867    8056 main.go:141] libmachine: (ha-868000) Calling .GetSSHKeyPath
	I0610 19:06:27.768944    8056 main.go:141] libmachine: (ha-868000) Calling .GetSSHUsername
	I0610 19:06:27.769036    8056 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/ha-868000/id_rsa Username:docker}
	I0610 19:06:27.802772    8056 ssh_runner.go:195] Run: systemctl --version
	I0610 19:06:27.807052    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:06:27.818015    8056 kubeconfig.go:125] found "ha-868000" server: "https://192.169.0.254:8443"
	I0610 19:06:27.818040    8056 api_server.go:166] Checking apiserver status ...
	I0610 19:06:27.818080    8056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:06:27.829423    8056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1925/cgroup
	W0610 19:06:27.836670    8056 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1925/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:06:27.836719    8056 ssh_runner.go:195] Run: ls
	I0610 19:06:27.840018    8056 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0610 19:06:27.843194    8056 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0610 19:06:27.843205    8056 status.go:422] ha-868000 apiserver status = Running (err=<nil>)
	I0610 19:06:27.843213    8056 status.go:257] ha-868000 status: &{Name:ha-868000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:06:27.843225    8056 status.go:255] checking status of ha-868000-m02 ...
	I0610 19:06:27.843474    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.843503    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.852611    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52122
	I0610 19:06:27.852975    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.853324    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.853340    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.853552    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.853660    8056 main.go:141] libmachine: (ha-868000-m02) Calling .GetState
	I0610 19:06:27.853746    8056 main.go:141] libmachine: (ha-868000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:06:27.853822    8056 main.go:141] libmachine: (ha-868000-m02) DBG | hyperkit pid from json: 7608
	I0610 19:06:27.854854    8056 main.go:141] libmachine: (ha-868000-m02) DBG | hyperkit pid 7608 missing from process table
	I0610 19:06:27.854882    8056 status.go:330] ha-868000-m02 host status = "Stopped" (err=<nil>)
	I0610 19:06:27.854889    8056 status.go:343] host is not running, skipping remaining checks
	I0610 19:06:27.854895    8056 status.go:257] ha-868000-m02 status: &{Name:ha-868000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:06:27.854915    8056 status.go:255] checking status of ha-868000-m03 ...
	I0610 19:06:27.855180    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.855201    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.864125    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52124
	I0610 19:06:27.864652    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.865078    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.865087    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.865300    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.865406    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetState
	I0610 19:06:27.865477    8056 main.go:141] libmachine: (ha-868000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:06:27.865582    8056 main.go:141] libmachine: (ha-868000-m03) DBG | hyperkit pid from json: 7624
	I0610 19:06:27.866629    8056 status.go:330] ha-868000-m03 host status = "Running" (err=<nil>)
	I0610 19:06:27.866639    8056 host.go:66] Checking if "ha-868000-m03" exists ...
	I0610 19:06:27.866898    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.866918    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.875715    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52126
	I0610 19:06:27.876232    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.876552    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.876562    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.876770    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.876873    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetIP
	I0610 19:06:27.876953    8056 host.go:66] Checking if "ha-868000-m03" exists ...
	I0610 19:06:27.877203    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.877228    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.885941    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52128
	I0610 19:06:27.886325    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.886705    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.886726    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.886968    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.887088    8056 main.go:141] libmachine: (ha-868000-m03) Calling .DriverName
	I0610 19:06:27.887219    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:06:27.887231    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetSSHHostname
	I0610 19:06:27.887308    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetSSHPort
	I0610 19:06:27.887392    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetSSHKeyPath
	I0610 19:06:27.887474    8056 main.go:141] libmachine: (ha-868000-m03) Calling .GetSSHUsername
	I0610 19:06:27.887548    8056 sshutil.go:53] new ssh client: &{IP:192.169.0.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/ha-868000-m03/id_rsa Username:docker}
	I0610 19:06:27.920671    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:06:27.931842    8056 kubeconfig.go:125] found "ha-868000" server: "https://192.169.0.254:8443"
	I0610 19:06:27.931868    8056 api_server.go:166] Checking apiserver status ...
	I0610 19:06:27.931909    8056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:06:27.943516    8056 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1908/cgroup
	W0610 19:06:27.950933    8056 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1908/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:06:27.951009    8056 ssh_runner.go:195] Run: ls
	I0610 19:06:27.954508    8056 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0610 19:06:27.957700    8056 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0610 19:06:27.957711    8056 status.go:422] ha-868000-m03 apiserver status = Running (err=<nil>)
	I0610 19:06:27.957719    8056 status.go:257] ha-868000-m03 status: &{Name:ha-868000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:06:27.957756    8056 status.go:255] checking status of ha-868000-m04 ...
	I0610 19:06:27.958061    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.958084    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.967161    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52132
	I0610 19:06:27.967531    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.967831    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.967840    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.968039    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.968146    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetState
	I0610 19:06:27.968229    8056 main.go:141] libmachine: (ha-868000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:06:27.968316    8056 main.go:141] libmachine: (ha-868000-m04) DBG | hyperkit pid from json: 7727
	I0610 19:06:27.969390    8056 status.go:330] ha-868000-m04 host status = "Running" (err=<nil>)
	I0610 19:06:27.969401    8056 host.go:66] Checking if "ha-868000-m04" exists ...
	I0610 19:06:27.969658    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.969679    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.978299    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52134
	I0610 19:06:27.978681    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.979032    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.979046    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.979283    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.979393    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetIP
	I0610 19:06:27.979494    8056 host.go:66] Checking if "ha-868000-m04" exists ...
	I0610 19:06:27.979763    8056 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:06:27.979787    8056 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:06:27.988495    8056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52136
	I0610 19:06:27.988877    8056 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:06:27.989207    8056 main.go:141] libmachine: Using API Version  1
	I0610 19:06:27.989217    8056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:06:27.989493    8056 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:06:27.989619    8056 main.go:141] libmachine: (ha-868000-m04) Calling .DriverName
	I0610 19:06:27.989766    8056 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:06:27.989779    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetSSHHostname
	I0610 19:06:27.989866    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetSSHPort
	I0610 19:06:27.989962    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetSSHKeyPath
	I0610 19:06:27.990044    8056 main.go:141] libmachine: (ha-868000-m04) Calling .GetSSHUsername
	I0610 19:06:27.990128    8056 sshutil.go:53] new ssh client: &{IP:192.169.0.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/ha-868000-m04/id_rsa Username:docker}
	I0610 19:06:28.022639    8056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:06:28.033700    8056 status.go:257] ha-868000-m04 status: &{Name:ha-868000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.72s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.31s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.31s)

TestMultiControlPlane/serial/RestartSecondaryNode (39.67s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 node start m02 -v=7 --alsologtostderr
E0610 19:06:58.235539    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-868000 node start m02 -v=7 --alsologtostderr: (39.152138733s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (323.08s)
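RestartClusterKeepsNodes is the slowest step in this block (323s): it stops the whole profile, restarts it with --wait=true, and compares minikube node list before and after to confirm that all four nodes survive the round trip:

	minikube node list -p ha-868000
	minikube stop -p ha-868000
	minikube start -p ha-868000 --wait=true
	minikube node list -p ha-868000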
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-868000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-868000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-868000 -v=7 --alsologtostderr: (27.157509157s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-868000 --wait=true -v=7 --alsologtostderr
E0610 19:08:20.153370    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:10:36.292992    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 19:11:03.988470    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-868000 --wait=true -v=7 --alsologtostderr: (4m55.803131409s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-868000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (323.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.30s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-868000 node delete m03 -v=7 --alsologtostderr: (7.847823834s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.30s)

TestMultiControlPlane/serial/StopCluster (249.53s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 stop -v=7 --alsologtostderr
E0610 19:20:36.271745    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-868000 stop -v=7 --alsologtostderr: (4m9.439378697s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr: exit status 7 (93.184044ms)
-- stdout --
	ha-868000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-868000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-868000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0610 19:20:36.822142    8628 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:20:36.822440    8628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:20:36.822445    8628 out.go:304] Setting ErrFile to fd 2...
	I0610 19:20:36.822449    8628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:20:36.822615    8628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:20:36.822802    8628 out.go:298] Setting JSON to false
	I0610 19:20:36.822825    8628 mustload.go:65] Loading cluster: ha-868000
	I0610 19:20:36.822860    8628 notify.go:220] Checking for updates...
	I0610 19:20:36.823127    8628 config.go:182] Loaded profile config "ha-868000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:20:36.823143    8628 status.go:255] checking status of ha-868000 ...
	I0610 19:20:36.823509    8628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:20:36.823567    8628 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:20:36.832540    8628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52454
	I0610 19:20:36.832863    8628 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:20:36.833298    8628 main.go:141] libmachine: Using API Version  1
	I0610 19:20:36.833309    8628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:20:36.833567    8628 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:20:36.833690    8628 main.go:141] libmachine: (ha-868000) Calling .GetState
	I0610 19:20:36.833786    8628 main.go:141] libmachine: (ha-868000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:20:36.833849    8628 main.go:141] libmachine: (ha-868000) DBG | hyperkit pid from json: 8154
	I0610 19:20:36.834823    8628 main.go:141] libmachine: (ha-868000) DBG | hyperkit pid 8154 missing from process table
	I0610 19:20:36.834886    8628 status.go:330] ha-868000 host status = "Stopped" (err=<nil>)
	I0610 19:20:36.834897    8628 status.go:343] host is not running, skipping remaining checks
	I0610 19:20:36.834904    8628 status.go:257] ha-868000 status: &{Name:ha-868000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:20:36.834932    8628 status.go:255] checking status of ha-868000-m02 ...
	I0610 19:20:36.835231    8628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:20:36.835260    8628 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:20:36.843555    8628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52457
	I0610 19:20:36.843856    8628 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:20:36.844210    8628 main.go:141] libmachine: Using API Version  1
	I0610 19:20:36.844227    8628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:20:36.844434    8628 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:20:36.844540    8628 main.go:141] libmachine: (ha-868000-m02) Calling .GetState
	I0610 19:20:36.844626    8628 main.go:141] libmachine: (ha-868000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:20:36.844694    8628 main.go:141] libmachine: (ha-868000-m02) DBG | hyperkit pid from json: 8247
	I0610 19:20:36.845631    8628 main.go:141] libmachine: (ha-868000-m02) DBG | hyperkit pid 8247 missing from process table
	I0610 19:20:36.845653    8628 status.go:330] ha-868000-m02 host status = "Stopped" (err=<nil>)
	I0610 19:20:36.845659    8628 status.go:343] host is not running, skipping remaining checks
	I0610 19:20:36.845666    8628 status.go:257] ha-868000-m02 status: &{Name:ha-868000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:20:36.845675    8628 status.go:255] checking status of ha-868000-m04 ...
	I0610 19:20:36.845923    8628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:20:36.845945    8628 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:20:36.854276    8628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52459
	I0610 19:20:36.854595    8628 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:20:36.854917    8628 main.go:141] libmachine: Using API Version  1
	I0610 19:20:36.854928    8628 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:20:36.855122    8628 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:20:36.855247    8628 main.go:141] libmachine: (ha-868000-m04) Calling .GetState
	I0610 19:20:36.855339    8628 main.go:141] libmachine: (ha-868000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:20:36.855419    8628 main.go:141] libmachine: (ha-868000-m04) DBG | hyperkit pid from json: 8337
	I0610 19:20:36.856356    8628 main.go:141] libmachine: (ha-868000-m04) DBG | hyperkit pid 8337 missing from process table
	I0610 19:20:36.856402    8628 status.go:330] ha-868000-m04 host status = "Stopped" (err=<nil>)
	I0610 19:20:36.856412    8628 status.go:343] host is not running, skipping remaining checks
	I0610 19:20:36.856418    8628 status.go:257] ha-868000-m04 status: &{Name:ha-868000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (249.53s)

TestMultiControlPlane/serial/RestartCluster (105.04s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-868000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0610 19:21:59.326833    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-868000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (1m44.575244101s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-868000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (105.04s)
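For reference, the go-template in the final check above is what asserts that every node reports Ready. A minimal, self-contained sketch of the same evaluation using Go's text/template, assuming a hand-written node list shaped like `kubectl get nodes -o json` (kubectl applies go-templates to the raw JSON, which is why the lowercase field names work):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Hypothetical sample; real data would come from `kubectl get nodes -o json`.
const nodesJSON = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

// Template text as passed to kubectl in the test invocation above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}` +
	`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Decode into map[string]any so the template's lowercase JSON keys resolve.
	var nodes map[string]any
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints " True" once per node; the test expects True on every line.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}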

TestImageBuild/serial/Setup (39.96s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-817000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-817000 --driver=hyperkit : (39.958626388s)
--- PASS: TestImageBuild/serial/Setup (39.96s)

TestImageBuild/serial/NormalBuild (2.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-817000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-817000: (2.667522877s)
--- PASS: TestImageBuild/serial/NormalBuild (2.67s)

TestImageBuild/serial/BuildWithBuildArg (0.52s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-817000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.52s)

TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-817000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.23s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-817000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.23s)

TestJSONOutput/start/Command (54.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-434000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0610 19:35:36.241448    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-434000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (54.214596756s)
--- PASS: TestJSONOutput/start/Command (54.21s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-434000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-434000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-434000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-434000 --output=json --user=testUser: (8.348400588s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-901000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-901000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.001878ms)

-- stdout --
	{"specversion":"1.0","id":"75edd086-24a3-410a-86b2-d210356092c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-901000] minikube v1.33.1 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f78be1f8-ba19-41fc-8668-3ebde10267cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"517892c4-69cd-4828-b2e3-916eb8cbb528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig"}}
	{"specversion":"1.0","id":"bf350ed9-7ab2-4d83-b4f0-d15b273c0905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"4819d3dc-7230-472b-8c56-ab077c41a572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2762352-174a-4d1e-8c5d-d00a92c66c5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube"}}
	{"specversion":"1.0","id":"8779b695-cd77-46f3-8627-9cc1286fc0fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6e7444c-51e5-4332-826f-5f9c67e83fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-901000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-901000
--- PASS: TestErrorJSONOutput (0.76s)
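Each stdout line above is a CloudEvents-style JSON event. As a hedged sketch (the struct below is illustrative, not minikube's own type), a consumer of `--output=json` can decode the error event in Go like this:

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative shape matching the fields visible in the log above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The io.k8s.sigs.minikube.error event copied from the stdout above.
	line := `{"specversion":"1.0","id":"c6e7444c-51e5-4332-826f-5f9c67e83fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: exit %s, %s\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
}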

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (93.12s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-308000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-308000 --driver=hyperkit : (40.276958404s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-310000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-310000 --driver=hyperkit : (41.335180735s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-308000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-310000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-310000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-310000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-310000: (5.2989777s)
helpers_test.go:175: Cleaning up "first-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-308000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-308000: (5.324576008s)
--- PASS: TestMinikubeProfile (93.12s)

TestMountStart/serial/StartWithMountFirst (19.37s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-985000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-985000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (18.363819982s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.37s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-985000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-985000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
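The two commands above are the whole verification: list the mounted host directory, then confirm a 9p filesystem appears in the guest's mount table. A minimal sketch of scripting the same check in Go, using the binary path and profile name from the log (illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the test: run `mount` inside the guest over minikube ssh.
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "mount-start-1-985000", "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	// The test greps for "9p"; any matching line means the host mount is live.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("host mount present:", line)
		}
	}
}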

TestMountStart/serial/StartWithMountSecond (21.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-996000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0610 19:38:39.347523    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-996000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.312041843s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.31s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (2.38s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-985000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-985000 --alsologtostderr -v=5: (2.382087249s)
--- PASS: TestMountStart/serial/DeleteFirst (2.38s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (8.42s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-996000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-996000: (8.421929472s)
--- PASS: TestMountStart/serial/Stop (8.42s)

TestMountStart/serial/RestartStopped (42.69s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-996000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-996000: (41.688118835s)
--- PASS: TestMountStart/serial/RestartStopped (42.69s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-996000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (129.79s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-353000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0610 19:40:36.285287    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-353000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m9.532648328s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.79s)

TestMultiNode/serial/DeployApp2Nodes (5.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-353000 -- rollout status deployment/busybox: (3.765507504s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-4hdtl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-fznn5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-4hdtl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-fznn5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-4hdtl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-fznn5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-4hdtl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-4hdtl -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-fznn5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-353000 -- exec busybox-fc5497c4f-fznn5 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
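The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the host IP that the pod then pings: awk keeps line 5 of the nslookup output and cut takes its third space-separated field. A sketch of the same parse in Go; the sample nslookup output below is hypothetical, only the line/field positions matter:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: line 5, field 3.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // awk NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3 -> index 2
}

func main() {
	// Hypothetical busybox-style nslookup output.
	sample := "Server:\t10.96.0.10\nAddress:\t10.96.0.10:53\n\n" +
		"Name:\thost.minikube.internal\n" +
		"Address 1: 192.169.0.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.169.0.1
}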

TestMultiNode/serial/AddNode (67.29s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-353000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-353000 -v 3 --alsologtostderr: (1m6.975109701s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (67.29s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-353000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp testdata/cp-test.txt multinode-353000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000:/home/docker/cp-test.txt multinode-353000-m02:/home/docker/cp-test_multinode-353000_multinode-353000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test_multinode-353000_multinode-353000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000:/home/docker/cp-test.txt multinode-353000-m03:/home/docker/cp-test_multinode-353000_multinode-353000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test_multinode-353000_multinode-353000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp testdata/cp-test.txt multinode-353000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt multinode-353000:/home/docker/cp-test_multinode-353000-m02_multinode-353000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test_multinode-353000-m02_multinode-353000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m02:/home/docker/cp-test.txt multinode-353000-m03:/home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test_multinode-353000-m02_multinode-353000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp testdata/cp-test.txt multinode-353000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile537174127/001/cp-test_multinode-353000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt multinode-353000:/home/docker/cp-test_multinode-353000-m03_multinode-353000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000 "sudo cat /home/docker/cp-test_multinode-353000-m03_multinode-353000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 cp multinode-353000-m03:/home/docker/cp-test.txt multinode-353000-m02:/home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 ssh -n multinode-353000-m02 "sudo cat /home/docker/cp-test_multinode-353000-m03_multinode-353000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.39s)

TestMultiNode/serial/StopNode (2.87s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 node stop m03: (2.348745012s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status: exit status 7 (259.317821ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr: exit status 7 (258.275258ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-353000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-353000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:43:12.241915    9830 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:43:12.242198    9830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:43:12.242204    9830 out.go:304] Setting ErrFile to fd 2...
	I0610 19:43:12.242208    9830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:43:12.242379    9830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:43:12.242565    9830 out.go:298] Setting JSON to false
	I0610 19:43:12.242587    9830 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:43:12.242630    9830 notify.go:220] Checking for updates...
	I0610 19:43:12.242892    9830 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:43:12.242910    9830 status.go:255] checking status of multinode-353000 ...
	I0610 19:43:12.243278    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.243338    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.252420    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53494
	I0610 19:43:12.252787    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.253216    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.253250    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.253455    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.253566    9830 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:43:12.253648    9830 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.253714    9830 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 9523
	I0610 19:43:12.254928    9830 status.go:330] multinode-353000 host status = "Running" (err=<nil>)
	I0610 19:43:12.254949    9830 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:43:12.255178    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.255198    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.263865    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53496
	I0610 19:43:12.264455    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.264931    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.264979    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.265267    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.265429    9830 main.go:141] libmachine: (multinode-353000) Calling .GetIP
	I0610 19:43:12.265511    9830 host.go:66] Checking if "multinode-353000" exists ...
	I0610 19:43:12.265757    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.265778    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.274246    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53498
	I0610 19:43:12.274683    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.275129    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.275193    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.275545    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.275710    9830 main.go:141] libmachine: (multinode-353000) Calling .DriverName
	I0610 19:43:12.275911    9830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:43:12.275936    9830 main.go:141] libmachine: (multinode-353000) Calling .GetSSHHostname
	I0610 19:43:12.276080    9830 main.go:141] libmachine: (multinode-353000) Calling .GetSSHPort
	I0610 19:43:12.276235    9830 main.go:141] libmachine: (multinode-353000) Calling .GetSSHKeyPath
	I0610 19:43:12.276390    9830 main.go:141] libmachine: (multinode-353000) Calling .GetSSHUsername
	I0610 19:43:12.276528    9830 sshutil.go:53] new ssh client: &{IP:192.169.0.19 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000/id_rsa Username:docker}
	I0610 19:43:12.310131    9830 ssh_runner.go:195] Run: systemctl --version
	I0610 19:43:12.314411    9830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:43:12.326132    9830 kubeconfig.go:125] found "multinode-353000" server: "https://192.169.0.19:8443"
	I0610 19:43:12.326164    9830 api_server.go:166] Checking apiserver status ...
	I0610 19:43:12.326202    9830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 19:43:12.338069    9830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup
	W0610 19:43:12.346471    9830 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1866/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 19:43:12.346528    9830 ssh_runner.go:195] Run: ls
	I0610 19:43:12.349983    9830 api_server.go:253] Checking apiserver healthz at https://192.169.0.19:8443/healthz ...
	I0610 19:43:12.353128    9830 api_server.go:279] https://192.169.0.19:8443/healthz returned 200:
	ok
	I0610 19:43:12.353138    9830 status.go:422] multinode-353000 apiserver status = Running (err=<nil>)
	I0610 19:43:12.353148    9830 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:43:12.353159    9830 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:43:12.353400    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.353421    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.365975    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53502
	I0610 19:43:12.366341    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.366646    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.366656    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.366847    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.366954    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:43:12.367038    9830 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.367113    9830 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 9545
	I0610 19:43:12.368348    9830 status.go:330] multinode-353000-m02 host status = "Running" (err=<nil>)
	I0610 19:43:12.368356    9830 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:43:12.368608    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.368630    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.377245    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53504
	I0610 19:43:12.377640    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.377962    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.377981    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.378180    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.378315    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetIP
	I0610 19:43:12.378408    9830 host.go:66] Checking if "multinode-353000-m02" exists ...
	I0610 19:43:12.378668    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.378690    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.387200    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53506
	I0610 19:43:12.387589    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.387927    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.387941    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.388166    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.388271    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .DriverName
	I0610 19:43:12.388421    9830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 19:43:12.388432    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHHostname
	I0610 19:43:12.388512    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHPort
	I0610 19:43:12.388594    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHKeyPath
	I0610 19:43:12.388672    9830 main.go:141] libmachine: (multinode-353000-m02) Calling .GetSSHUsername
	I0610 19:43:12.388745    9830 sshutil.go:53] new ssh client: &{IP:192.169.0.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19046-5942/.minikube/machines/multinode-353000-m02/id_rsa Username:docker}
	I0610 19:43:12.420679    9830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 19:43:12.430901    9830 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:43:12.430963    9830 status.go:255] checking status of multinode-353000-m03 ...
	I0610 19:43:12.431270    9830 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:43:12.431292    9830 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:43:12.440117    9830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53509
	I0610 19:43:12.440534    9830 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:43:12.440931    9830 main.go:141] libmachine: Using API Version  1
	I0610 19:43:12.440941    9830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:43:12.441151    9830 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:43:12.441265    9830 main.go:141] libmachine: (multinode-353000-m03) Calling .GetState
	I0610 19:43:12.441359    9830 main.go:141] libmachine: (multinode-353000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:43:12.441424    9830 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid from json: 9620
	I0610 19:43:12.442633    9830 main.go:141] libmachine: (multinode-353000-m03) DBG | hyperkit pid 9620 missing from process table
	I0610 19:43:12.442651    9830 status.go:330] multinode-353000-m03 host status = "Stopped" (err=<nil>)
	I0610 19:43:12.442657    9830 status.go:343] host is not running, skipping remaining checks
	I0610 19:43:12.442664    9830 status.go:257] multinode-353000-m03 status: &{Name:multinode-353000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.87s)
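Note the pattern in the run above: `minikube status` exits with status 7 once any node is stopped, so callers must treat a non-zero exit as "degraded" rather than as a command failure. A hedged sketch of handling that in Go (profile name and binary path taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-353000", "status")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 7 in the run above: at least one node is stopped.
		fmt.Printf("status exited %d (degraded cluster):\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the binary itself failed to run
	}
	fmt.Printf("all nodes running:\n%s", out)
}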

TestMultiNode/serial/StopMultiNode (16.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-353000 stop: (16.627064262s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status: exit status 7 (80.330615ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-353000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr: exit status 7 (80.206618ms)

-- stdout --
	multinode-353000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-353000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 19:52:51.432214   10138 out.go:291] Setting OutFile to fd 1 ...
	I0610 19:52:51.432485   10138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:52:51.432491   10138 out.go:304] Setting ErrFile to fd 2...
	I0610 19:52:51.432495   10138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 19:52:51.432685   10138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-5942/.minikube/bin
	I0610 19:52:51.432856   10138 out.go:298] Setting JSON to false
	I0610 19:52:51.432892   10138 mustload.go:65] Loading cluster: multinode-353000
	I0610 19:52:51.432929   10138 notify.go:220] Checking for updates...
	I0610 19:52:51.433195   10138 config.go:182] Loaded profile config "multinode-353000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 19:52:51.433212   10138 status.go:255] checking status of multinode-353000 ...
	I0610 19:52:51.433577   10138 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:51.433634   10138 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:51.442783   10138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53923
	I0610 19:52:51.443151   10138 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:51.443570   10138 main.go:141] libmachine: Using API Version  1
	I0610 19:52:51.443589   10138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:51.443802   10138 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:51.443933   10138 main.go:141] libmachine: (multinode-353000) Calling .GetState
	I0610 19:52:51.444041   10138 main.go:141] libmachine: (multinode-353000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:52:51.444090   10138 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid from json: 10002
	I0610 19:52:51.445054   10138 main.go:141] libmachine: (multinode-353000) DBG | hyperkit pid 10002 missing from process table
	I0610 19:52:51.445095   10138 status.go:330] multinode-353000 host status = "Stopped" (err=<nil>)
	I0610 19:52:51.445103   10138 status.go:343] host is not running, skipping remaining checks
	I0610 19:52:51.445110   10138 status.go:257] multinode-353000 status: &{Name:multinode-353000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 19:52:51.445131   10138 status.go:255] checking status of multinode-353000-m02 ...
	I0610 19:52:51.445376   10138 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0610 19:52:51.445398   10138 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0610 19:52:51.453907   10138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53925
	I0610 19:52:51.454230   10138 main.go:141] libmachine: () Calling .GetVersion
	I0610 19:52:51.454563   10138 main.go:141] libmachine: Using API Version  1
	I0610 19:52:51.454578   10138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 19:52:51.454768   10138 main.go:141] libmachine: () Calling .GetMachineName
	I0610 19:52:51.454893   10138 main.go:141] libmachine: (multinode-353000-m02) Calling .GetState
	I0610 19:52:51.455022   10138 main.go:141] libmachine: (multinode-353000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0610 19:52:51.455064   10138 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid from json: 10028
	I0610 19:52:51.455995   10138 main.go:141] libmachine: (multinode-353000-m02) DBG | hyperkit pid 10028 missing from process table
	I0610 19:52:51.456024   10138 status.go:330] multinode-353000-m02 host status = "Stopped" (err=<nil>)
	I0610 19:52:51.456030   10138 status.go:343] host is not running, skipping remaining checks
	I0610 19:52:51.456038   10138 status.go:257] multinode-353000-m02 status: &{Name:multinode-353000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.79s)

TestMultiNode/serial/RestartMultiNode (73.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-353000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-353000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m13.586907905s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-353000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.93s)

TestMultiNode/serial/ValidateNameConflict (48.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-353000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-353000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-353000-m02 --driver=hyperkit : exit status 14 (423.240423ms)

-- stdout --
	* [multinode-353000-m02] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-353000-m02' is duplicated with machine name 'multinode-353000-m02' in profile 'multinode-353000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-353000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-353000-m03 --driver=hyperkit : (39.742805425s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-353000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-353000: exit status 80 (285.167015ms)

-- stdout --
	* Adding node m03 to cluster multinode-353000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-353000-m03 already exists in multinode-353000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-353000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-353000-m03: (7.616602451s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.13s)

TestPreload (233.52s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-420000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-420000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (2m48.699792318s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-420000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-420000 image pull gcr.io/k8s-minikube/busybox: (2.483480619s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-420000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-420000: (8.386673128s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-420000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-420000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (48.510485185s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-420000 image list
helpers_test.go:175: Cleaning up "test-preload-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-420000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-420000: (5.285322302s)
--- PASS: TestPreload (233.52s)

TestSkaffold (232.87s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1208585265 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1208585265 version: (1.501133541s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-937000 --memory=2600 --driver=hyperkit 
E0610 20:05:36.314917    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-937000 --memory=2600 --driver=hyperkit : (2m35.844862637s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1208585265 run --minikube-profile skaffold-937000 --kube-context skaffold-937000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1208585265 run --minikube-profile skaffold-937000 --kube-context skaffold-937000 --status-check=true --port-forward=false --interactive=false: (56.948013944s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7bf96b4b47-kxvfc" [9ef81d02-04e0-47db-8dcc-002db4304b34] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00406727s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-64b5d855f6-qxxhc" [5021014e-03ef-4db5-beda-a264e7095154] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004783145s
helpers_test.go:175: Cleaning up "skaffold-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-937000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-937000: (5.274186945s)
--- PASS: TestSkaffold (232.87s)

TestRunningBinaryUpgrade (88.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1145002508 start -p running-upgrade-360000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1145002508 start -p running-upgrade-360000 --memory=2200 --vm-driver=hyperkit : (53.703845364s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-360000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0610 20:13:56.872147    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:56.878473    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:56.888934    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:56.909564    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:56.950947    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:57.032553    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:57.193138    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:57.514270    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:58.154561    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:13:59.435093    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:14:01.995764    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-360000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (27.528838625s)
helpers_test.go:175: Cleaning up "running-upgrade-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-360000
E0610 20:14:07.117052    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-360000: (5.338829139s)
--- PASS: TestRunningBinaryUpgrade (88.63s)

TestKubernetesUpgrade (236.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0610 20:14:17.358338    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.155195727s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-812000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-812000: (2.425976294s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-812000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-812000 status --format={{.Host}}: exit status 7 (104.712885ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperkit : (2m33.653548092s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-812000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (575.364986ms)

-- stdout --
	* [kubernetes-upgrade-812000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-812000
	    minikube start -p kubernetes-upgrade-812000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8120002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-812000 --kubernetes-version=v1.30.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-812000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=hyperkit : (23.357606497s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-812000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-812000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-812000: (5.291127986s)
--- PASS: TestKubernetesUpgrade (236.61s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.28s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current369404510/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current369404510/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current369404510/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current369404510/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.28s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1176574856/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1176574856/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1176574856/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1176574856/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.38s)

TestStoppedBinaryUpgrade/Setup (1.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.89s)

TestStoppedBinaryUpgrade/Upgrade (94.48s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2709278638 start -p stopped-upgrade-266000 --memory=2200 --vm-driver=hyperkit 
E0610 20:15:18.802215    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:15:36.404546    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2709278638 start -p stopped-upgrade-266000 --memory=2200 --vm-driver=hyperkit : (50.580044183s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2709278638 -p stopped-upgrade-266000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.2709278638 -p stopped-upgrade-266000 stop: (8.243235013s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-266000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0610 20:16:40.723177    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-266000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (35.658650727s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-266000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-266000: (2.650636269s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.65s)

TestPause/serial/Start (50.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-705000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-705000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (50.67836641s)
--- PASS: TestPause/serial/Start (50.68s)

TestPause/serial/SecondStartNoReconfiguration (41.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-705000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-705000 --alsologtostderr -v=1 --driver=hyperkit : (41.521522887s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.71s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (710.139294ms)

-- stdout --
	* [NoKubernetes-991000] minikube v1.33.1 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-5942/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-5942/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.71s)

TestNoKubernetes/serial/StartWithK8s (38.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit : (38.458271882s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.63s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-705000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.19s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-705000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-705000 --output=json --layout=cluster: exit status 2 (191.480572ms)

-- stdout --
	{"Name":"pause-705000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-705000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.19s)

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-705000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-705000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (5.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-705000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-705000 --alsologtostderr -v=5: (5.23867029s)
--- PASS: TestPause/serial/DeletePaused (5.24s)

TestPause/serial/VerifyDeletedResources (0.28s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.28s)

TestNetworkPlugins/group/auto/Start (90.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m30.829765072s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.83s)

TestNoKubernetes/serial/StartWithStopK8s (19.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit 
E0610 20:18:56.870445    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit : (16.750906379s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-991000 status -o json: exit status 2 (161.099265ms)

-- stdout --
	{"Name":"NoKubernetes-991000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-991000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-991000: (2.535224066s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.45s)

TestNoKubernetes/serial/Start (21.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --no-kubernetes --driver=hyperkit : (21.305420975s)
--- PASS: TestNoKubernetes/serial/Start (21.31s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet"
E0610 20:19:24.564069    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (132.838647ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.55s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.55s)

TestNoKubernetes/serial/Stop (2.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-991000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-991000: (2.382080041s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

TestNoKubernetes/serial/StartNoArgs (19.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-991000 --driver=hyperkit : (19.407852714s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-991000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (130.380174ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/kindnet/Start (63.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m3.516296414s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

TestNetworkPlugins/group/auto/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9bqzb" [bb324cb4-1ed2-427b-b6b3-065f862b41a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9bqzb" [bb324cb4-1ed2-427b-b6b3-065f862b41a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003810332s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.15s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (73.56s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0610 20:20:36.404105    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m13.559492355s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.56s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2k6fp" [609bfb02-cb03-4f5d-9f1d-f959736183f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003628613s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2h6ph" [3e88b609-c980-4ae4-b610-379f78d19068] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2h6ph" [3e88b609-c980-4ae4-b610-379f78d19068] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004086816s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.15s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (64.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m4.831180387s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.83s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4bchh" [0cb48030-84e1-4a9b-b47e-44fe9cd2b862] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004598652s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bpvdv" [948ec133-9a76-45c3-b9de-559e860c737a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bpvdv" [948ec133-9a76-45c3-b9de-559e860c737a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.002772467s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.15s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (62.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m2.007186478s)
--- PASS: TestNetworkPlugins/group/false/Start (62.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hgjmh" [e13c1148-8c56-4d5f-832f-acbd4733a4fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hgjmh" [e13c1148-8c56-4d5f-832f-acbd4733a4fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002229689s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (54.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (54.498708447s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.50s)

TestNetworkPlugins/group/false/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

TestNetworkPlugins/group/false/NetCatPod (10.16s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pr7k5" [6232e1e8-9175-42c2-b84f-6f57ee6f298e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pr7k5" [6232e1e8-9175-42c2-b84f-6f57ee6f298e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003946265s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.16s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (62.97s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0610 20:23:56.869445    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m2.974742415s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.97s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jjbnm" [653781d0-2f4c-48ca-a3b4-25a61be43509] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jjbnm" [653781d0-2f4c-48ca-a3b4-25a61be43509] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003579345s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (55.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (55.960905979s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.96s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vc8lw" [1bec7fef-892f-4dff-a0c2-d9ca44ffec0a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003672222s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/flannel/NetCatPod (13.15s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7h7bc" [82c1b97a-5549-42f5-ac60-8c5a3986c6a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 20:25:02.658020    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.663242    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.673526    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.694135    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.736128    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.816520    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:02.978369    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:03.300480    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:03.940676    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:05.222045    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:25:07.782162    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7h7bc" [82c1b97a-5549-42f5-ac60-8c5a3986c6a4] Running
E0610 20:25:12.903411    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00266827s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.15s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-px5nl" [30106f6e-8a53-4285-b482-63582264e655] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-px5nl" [30106f6e-8a53-4285-b482-63582264e655] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004604614s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.15s)

TestNetworkPlugins/group/kubenet/Start (54.89s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-335000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (54.886710582s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.89s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0610 20:25:36.401777    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (143.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-269000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0610 20:25:54.333843    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:25:55.615983    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:25:58.176405    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:26:03.297596    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:26:13.538675    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:26:24.586252    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-269000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m23.753316465s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.75s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-335000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-335000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h4cbk" [2502f1b3-fd78-411a-977a-3847d1c1651e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h4cbk" [2502f1b3-fd78-411a-977a-3847d1c1651e] Running
E0610 20:26:34.019803    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004648443s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.14s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-335000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-335000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0610 20:39:55.083229    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:40:02.682466    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (58.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-879000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:27:05.925483    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
E0610 20:27:15.004966    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:27:26.409450    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
E0610 20:27:34.473885    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.480201    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.490309    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.512398    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.552907    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.633146    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:34.794470    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:35.115801    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:35.755961    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:37.036165    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:39.598290    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:44.718696    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:27:46.534315    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:27:54.959707    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-879000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.1: (58.420370002s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.42s)

TestStartStop/group/no-preload/serial/DeployApp (9.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-879000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6b338c2-9711-4224-adda-351781ca9758] Pending
helpers_test.go:344: "busybox" [e6b338c2-9711-4224-adda-351781ca9758] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6b338c2-9711-4224-adda-351781ca9758] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003301475s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-879000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.21s)
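
Note: DeployApp's pass condition has two parts visible above: the busybox pod from testdata/busybox.yaml must become healthy within 8 minutes, and the follow-up exec of "ulimit -n" must succeed. A compressed sketch of the same sequence, assuming kubectl, the no-preload-879000 context, and a local copy of busybox.yaml, and using kubectl wait in place of the harness's own poll:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the no-preload-879000 context
// and returns its combined output.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "no-preload-879000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := run("create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	// Block until the pod is Ready instead of hand-rolling the poll.
	if _, err := run("wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m"); err != nil {
		panic(err)
	}
	out, err := run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit in pod: %s", out)
}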

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-879000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/no-preload/serial/Stop (8.49s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-879000 --alsologtostderr -v=3
E0610 20:28:07.371853    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-879000 --alsologtostderr -v=3: (8.488843891s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-879000 -n no-preload-879000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-879000 -n no-preload-879000: exit status 7 (68.765812ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/no-preload/serial/SecondStart (287.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-879000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:28:15.440665    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-879000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.1: (4m47.065004578s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-879000 -n no-preload-879000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (287.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-269000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9130701b-9b38-4ab2-acbc-04eb57acdd02] Pending
helpers_test.go:344: "busybox" [9130701b-9b38-4ab2-acbc-04eb57acdd02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9130701b-9b38-4ab2-acbc-04eb57acdd02] Running
E0610 20:28:24.409434    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.415034    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.425476    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.445838    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.486129    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.566340    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:24.728034    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:25.052743    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:25.694545    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:26.976338    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004399616s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-269000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-269000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-269000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-269000 --alsologtostderr -v=3
E0610 20:28:29.537045    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:34.657266    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-269000 --alsologtostderr -v=3: (8.402789652s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-269000 -n old-k8s-version-269000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-269000 -n old-k8s-version-269000: exit status 7 (70.966986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-269000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.33s)
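
Note: as the "(may be ok)" annotations above indicate, minikube status deliberately exits non-zero while the host is stopped, so anything scripting against it has to separate "command failed" from "cluster is down". A sketch of that distinction, assuming the old-k8s-version-269000 profile from this run and the test's relative binary path:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-269000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit still carries the host state on stdout
		// (here "Stopped"); only a missing binary or similar is fatal.
		fmt.Printf("host=%q exit=%d (non-zero is expected while stopped)\n",
			out, exitErr.ExitCode())
		return
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("host=%q\n", out)
}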

TestStartStop/group/old-k8s-version/serial/SecondStart (400.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-269000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0610 20:28:36.928002    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:28:39.490227    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 20:28:44.908086    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:28:56.400869    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:28:56.895868    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:28:58.867744    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:58.873119    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:58.883395    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:58.903570    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:58.943722    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:59.024864    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:59.186570    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:28:59.507291    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:00.147425    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:01.429399    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:03.991501    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:05.388491    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:29:09.113587    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:19.353701    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:29.292451    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
E0610 20:29:39.833777    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:29:46.348513    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:29:55.083919    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.089697    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.101208    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.121754    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.163753    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.243865    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.404784    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:55.725291    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:56.366357    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:29:57.647735    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:30:00.209692    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:30:02.684593    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:30:05.330073    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:30:15.571841    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:30:18.320845    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:30:19.950514    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:30:20.795155    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:30:24.260446    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.265632    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.276454    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.296801    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.338185    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.418316    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.580261    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:24.901759    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:25.542053    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:26.823708    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:29.384241    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:30.375568    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
E0610 20:30:34.504572    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:36.052642    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:30:36.427967    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 20:30:44.746249    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:30:53.073380    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:31:05.226290    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:31:08.268340    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:31:17.012894    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:31:20.768330    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:31:28.118863    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.125143    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.136018    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.157862    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.198727    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.279314    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.440438    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:28.761619    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:29.402110    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:30.682489    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:33.244072    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:38.364613    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:31:42.715976    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:31:45.444379    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
E0610 20:31:46.187802    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:31:48.605582    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:32:09.086789    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
E0610 20:32:13.132065    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
E0610 20:32:34.474638    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
E0610 20:32:38.934358    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:32:50.047252    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-269000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m39.873898236s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-269000 -n old-k8s-version-269000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (400.04s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tz2qp" [34612bcc-a389-4d4d-888f-8a8792c93dc2] Running
E0610 20:33:02.161310    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/custom-flannel-335000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00452285s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tz2qp" [34612bcc-a389-4d4d-888f-8a8792c93dc2] Running
E0610 20:33:08.108463    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00295291s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-879000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-879000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
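
Note: VerifyKubernetesImages lists the cluster's cached images as JSON and flags repositories outside the expected Kubernetes set; the busybox image it reports above is the test's own deployment, so the pass is correct. A sketch of the same listing that decodes the output generically rather than assuming minikube's JSON field names (and assumes --format=json emits a JSON array, as the command above suggests):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "no-preload-879000", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Decode without committing to a schema: each entry becomes a map,
	// so field names can be inspected rather than hard-coded.
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img)
	}
}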

TestStartStop/group/no-preload/serial/Pause (1.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-879000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-879000 -n no-preload-879000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-879000 -n no-preload-879000: exit status 2 (159.896929ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-879000 -n no-preload-879000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-879000 -n no-preload-879000: exit status 2 (163.215519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-879000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-879000 -n no-preload-879000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-879000 -n no-preload-879000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.92s)
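The pause check above drives the same commands a user would, and the two non-zero exits are expected: `minikube status` deliberately exits non-zero when components are not in the Running state, which the harness acknowledges with "may be ok". The equivalent manual sequence, using only commands that appear in the log:

	out/minikube-darwin-amd64 pause -p no-preload-879000
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-879000 -n no-preload-879000   # prints "Paused", exit 2
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-879000 -n no-preload-879000     # prints "Stopped", exit 2
	out/minikube-darwin-amd64 unpause -p no-preload-879000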

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (61.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-257000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:33:24.408457    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:33:52.107924    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/false-335000/client.crt: no such file or directory
E0610 20:33:56.893517    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/skaffold-937000/client.crt: no such file or directory
E0610 20:33:58.866125    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
E0610 20:34:11.968539    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-257000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.1: (1m1.525109198s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.53s)

TestStartStop/group/embed-certs/serial/DeployApp (10.2s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-257000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd8e7992-be60-4e2a-927b-c0148fbc1f18] Pending
helpers_test.go:344: "busybox" [cd8e7992-be60-4e2a-927b-c0148fbc1f18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0610 20:34:26.557077    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/enable-default-cni-335000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cd8e7992-be60-4e2a-927b-c0148fbc1f18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004625715s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-257000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.20s)
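The DeployApp flow is reproducible by hand. The create and exec commands below are verbatim from the log; the `kubectl wait` line is an illustrative substitute for the harness's 8m0s pod poller:

	kubectl --context embed-certs-257000 create -f testdata/busybox.yaml
	kubectl --context embed-certs-257000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
	kubectl --context embed-certs-257000 exec busybox -- /bin/sh -c "ulimit -n"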

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-257000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-257000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/embed-certs/serial/Stop (8.41s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-257000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-257000 --alsologtostderr -v=3: (8.409880131s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-257000 -n embed-certs-257000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-257000 -n embed-certs-257000: exit status 7 (68.940255ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-257000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
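Exit status 7 from `status` on a stopped profile is expected (the harness notes it "may be ok"), and enabling an addon against a stopped cluster effectively just records the setting to be applied on the next start. The two commands as run above:

	out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-257000 -n embed-certs-257000   # prints "Stopped", exit 7
	out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-257000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4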

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (288.63s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-257000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:34:55.083462    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:35:02.683785    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/auto-335000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-257000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.1: (4m48.467184033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-257000 -n embed-certs-257000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.63s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l5785" [27da97be-81b3-49a8-a83f-4440c5f5eb67] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004546668s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l5785" [27da97be-81b3-49a8-a83f-4440c5f5eb67] Running
E0610 20:35:22.774178    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/flannel-335000/client.crt: no such file or directory
E0610 20:35:24.259212    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003324308s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-269000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-269000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/old-k8s-version/serial/Pause (2.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-269000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-269000 -n old-k8s-version-269000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-269000 -n old-k8s-version-269000: exit status 2 (165.324641ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-269000 -n old-k8s-version-269000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-269000 -n old-k8s-version-269000: exit status 2 (164.962342ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-269000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-269000 -n old-k8s-version-269000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-269000 -n old-k8s-version-269000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-486000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:35:36.427255    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 20:35:51.949028    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:35:53.071281    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kindnet-335000/client.crt: no such file or directory
E0610 20:36:28.116645    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/kubenet-335000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-486000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.1: (54.000040631s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.00s)
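Since this profile pins the API server to port 8444 via --apiserver-port, one quick sanity check is to read the server URL back out of the kubeconfig; this jsonpath query is illustrative and not part of the test:

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-486000")].cluster.server}'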

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-486000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa07fc27-2fe2-466c-a520-5dbad365c690] Pending
helpers_test.go:344: "busybox" [fa07fc27-2fe2-466c-a520-5dbad365c690] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa07fc27-2fe2-466c-a520-5dbad365c690] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003598518s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-486000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-486000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-486000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-486000 --alsologtostderr -v=3
E0610 20:36:45.442451    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/calico-335000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-486000 --alsologtostderr -v=3: (8.428660087s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-486000 -n default-k8s-diff-port-486000: exit status 7 (67.884847ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-486000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/FirstStart (48.71s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-559000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:39:17.217747    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-559000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.1: (48.71260746s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.71s)
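This start narrows minikube's readiness gate: `--wait` takes a comma-separated list of components to block on (here apiserver, system_pods, and default_sa), while `--extra-config=kubeadm.pod-network-cidr=...` forwards the pod CIDR to kubeadm. Trimmed to its distinctive flags, the invocation from the log is:

	out/minikube-darwin-amd64 start -p newest-cni-559000 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --feature-gates ServerSideApply=true --driver=hyperkit --kubernetes-version=v1.30.1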

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fc6gc" [5f3cdbaa-0732-4026-90e5-91ac179272fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00359978s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.06s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fc6gc" [5f3cdbaa-0732-4026-90e5-91ac179272fc] Running
E0610 20:39:39.922387    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/old-k8s-version-269000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003216635s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-257000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-257000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/embed-certs/serial/Pause (2.08s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-257000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-257000 -n embed-certs-257000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-257000 -n embed-certs-257000: exit status 2 (173.996571ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-257000 -n embed-certs-257000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-257000 -n embed-certs-257000: exit status 2 (177.724772ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-257000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-257000 -n embed-certs-257000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-257000 -n embed-certs-257000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-559000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)
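The warning above is why DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop are no-ops for this group: with --network-plugin=cni and no CNI actually deployed, pods cannot schedule. If scheduling were needed, a CNI manifest would have to be applied first; the manifest path below is a placeholder, not something the test does:

	kubectl --context newest-cni-559000 apply -f cni-manifest.yaml   # placeholder manifest (flannel, calico, etc.)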

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.41s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-559000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-559000 --alsologtostderr -v=3: (8.40789315s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-559000 -n newest-cni-559000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-559000 -n newest-cni-559000: exit status 7 (69.706255ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-559000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.36s)

TestStartStop/group/newest-cni/serial/SecondStart (28.82s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-559000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.1
E0610 20:40:24.258796    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/bridge-335000/client.crt: no such file or directory
E0610 20:40:36.425960    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/functional-192000/client.crt: no such file or directory
E0610 20:40:39.137939    6485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19046-5942/.minikube/profiles/no-preload-879000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-559000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.1: (28.660333703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-559000 -n newest-cni-559000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (28.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.14s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-559000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.14s)

TestStartStop/group/newest-cni/serial/Pause (1.81s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-559000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-559000 -n newest-cni-559000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-559000 -n newest-cni-559000: exit status 2 (157.82769ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-559000 -n newest-cni-559000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-559000 -n newest-cni-559000: exit status 2 (158.013461ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-559000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-559000 -n newest-cni-559000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-559000 -n newest-cni-559000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.81s)

Test skip (19/327)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (6.14s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-335000 [pass: true] --------------------------------
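Note that every probe below fails with one of two messages ("context was not found" from kubectl, "Profile ... not found" from minikube) because the cilium profile was never created: the test was skipped before any cluster start, so the kubectl config dump is empty as well. The recovery the output itself suggests would be:

	minikube start -p cilium-335000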
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-335000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-335000

                                                
                                                

                                                
                                                

>>> host: docker daemon status:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: docker daemon config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: docker system info:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: cri-docker daemon status:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: cri-docker daemon config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: cri-dockerd version:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: containerd daemon status:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: containerd daemon config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: containerd config dump:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: crio daemon status:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: crio daemon config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: /etc/crio:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

>>> host: crio config:
* Profile "cilium-335000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335000"

----------------------- debugLogs end: cilium-335000 [took: 5.749397219s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-335000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-335000
--- SKIP: TestNetworkPlugins/group/cilium (6.14s)
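Every probe in the debugLogs dump above failed with the same two-line message because the cilium profile was never created: the suite skipped this variant before any `minikube start` ran. If one wanted to stand the cluster up by hand, the log's own hint plus minikube's documented --cni flag suggest an invocation along these lines (illustrative, not part of this run):

	# Create the profile the diagnostics were probing for, with the cilium CNI
	minikube start -p cilium-335000 --driver=hyperkit --cni=cilium
	# Confirm the profile now exists before re-running any diagnostics
	minikube profile list
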

TestStartStop/group/disable-driver-mounts (0.39s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-972000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)
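The skip is expected on this platform: as the log line above states, the disable-driver-mounts group only runs on virtualbox, since the hypervisor-provided host filesystem mounts it exercises are a virtualbox-driver feature. On a machine with VirtualBox available, a manual equivalent would look roughly like this (illustrative; the flag is documented in `minikube start --help`):

	# --disable-driver-mounts turns off the filesystem mounts provided by the hypervisor
	minikube start -p disable-driver-mounts-972000 --driver=virtualbox --disable-driver-mounts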